
Perspectives on Artificial Intelligence in Education

A conversation between an education futurist and an education strategist

Article
April 16, 2024

By: Jason Swanson

KEY TAKEAWAYS:

  • For generative AI, there seem to be two major chunks of opportunity: one associated with teacher productivity and creativity, the other with student tutors.
  • We should start thinking about AI not as a solution but as an accelerator of systems transformation.
  • There has already been so much difficulty in this country over what constitutes “truth,” and that is only going to accelerate with these tools.

An AI-generated image of a little Black girl using a laptop with glowing specks floating out of the computer, illuminating her face

Embracing innovation and staying ahead of emerging trends has long been a defining characteristic of the KnowledgeWorks approach to education systems change – it’s what sets us apart.

Artificial intelligence holds tremendous potential to revolutionize various facets of education, from personalizing learning experiences to creating administrative efficiencies, but it comes with a series of potential risks and critical uncertainties. Using strategic foresight to consider these future challenges and opportunities, KnowledgeWorks Senior Director of Strategic Foresight Jason Swanson sat down with Steve Nordmark, founder and consultant of Learning Community Insight, LLC, to glean valuable insights, identify synergies and explore how we might begin to proactively shape a future of education that is more adaptable, equitable and effective.

Jason Swanson: So Steve, we’re sitting down today to talk about artificial intelligence (AI), its applications and its potential impacts on education – perhaps cutting through a little bit of the hype and addressing some of the fears that are out there. Let’s jump right into it, shall we?

As somebody who spends a lot of time thinking through how we might leverage technology to enhance teaching and learning, how do you see artificial intelligence integrating with our existing education systems?

Steve Nordmark: A lot of what’s going on out there today is centered around generative AI. Not all AI is generative, but it’s certainly what’s capturing most of our attention. There seem to be two major chunks of opportunity, one associated with teacher productivity and creativity and the other being student tutors.

From the teacher tool side of things, I’ve seen great supportive tools and opportunities for lesson planning and creating learning materials. Generative AI tools can take prompts from educators and quickly produce full lesson plans or assignment materials, and do so quite effectively and efficiently. I think it will help with learning engagement – drawing more out of learners by designing materials that align more closely with each learner’s needs and preferences.

For students, we’ve seen some of the math and writing tutor options enabled through generative AI, and we’ll continue to see more solutions across curriculum areas from vendors as they start to learn more about each learner’s capabilities, needs and preferences. What’s your perspective, Jason?

Could this advancement lead to a significant shift in traditional models or the emergence of entirely new approaches? 

Jason: From a futures perspective, we are always seeking to ask big questions around the longer-term impacts that could shift traditional education models. Might AI give rise to an entirely new approach to learning? Maybe.

I really appreciate what you said about efficiency. AI is doing cool things in that space and can free up teachers with almost a new division of labor between humans and technology. That alone isn’t necessarily transformational, but what might an educator be able to do with that new time dividend? Does that give them more space to connect with learners on a human level? Does this help with things like social-emotional learning, metacognition and “softer skill” areas because AI has given educators the opportunity to take things off their plate? I think for us the critical uncertainty here isn’t whether this will affect things, but in what ways it will.

The way we have been thinking about this at KnowledgeWorks is in a dichotomy – AI as a tool or AI as a solution. And again, we are thinking about a decade out here. So, if we are thinking about shifts in teaching models or educational models, it certainly could happen. What we really need to start thinking through now is in what ways we can use AI as a tool to accelerate broader systems change. So, AI not serving as a solution, but as an accelerator.

We do have to deal with the challenge of student data privacy because that’s going to be a constant battle.
Steve Nordmark, Founder and Consultant, Learning Community Insight, LLC

From your point of view, what impacts might this advancement have on personalized learning experiences for students? 

Steve: I don’t really see personalization so much as differentiation coming out of these tools. The greatest impacts that I predict would be on teachers, in their planning and material creation. There’s just a lot of opportunity for creativity that didn’t exist before. Teachers will have more options for curriculum support and learning experience design. The focus may not be specifically on personalization, but on the degree to which individual educators can use these tools to create more meaningful and engaging learning experiences.

I’m working with a school in the Colorado area, and they’re really excited about what AI is doing for them from an instructional planning standpoint. With AI systems giving you more data, in real time, you have asset-based information about the learners you’re working with. Educators can make some very creative and powerful decisions that can help to “personalize” the experience. I think it’s going to take a while for a lot of these AI supported systems to gather enough data on learners to dive deeper into the personalized context.

But we do have to deal with the challenge of student data privacy because that’s going to be a constant battle. Security and data privacy issues are obviously very pronounced with younger learners, but it’s certainly a big issue across the board when you’re talking about public education. That’s just going to be an ongoing battle.

Do you see AI as a tool to address things like equity or access to quality education? 

Steve: Yes, is the short answer. As I think about this overall as a societal issue, I think it’s a moving target. We’re always going to have emerging tools like AI, which have the potential to give learners access to materials, resources and supports they didn’t have in the past. But we also know there is huge potential for AI to accentuate digital inequity. There’s already digital inequity in bandwidth, in access to computing devices and in access to high-quality software. And AI is not going to be any different. It’s just going to be another component in that digital equity conversation.

Jason: I think it’s likely to be made even more complicated by the fact that policies are obviously lagging behind technological advancements. So, what does it mean to promote a fair use policy from an educational standpoint so that access is not necessarily market-driven, but instead a public utility? What happens if one district partners with one ed tech company and another district partners with another ed tech company that isn’t as great? Is this going to further entrench inequities? It makes me go back to that initial framing question of tool versus solution.

I think there is a systemic and societal dynamic that views technology as a panacea. We need to first reframe our own thinking about what it means for things such as learner agency and what we have to do in terms of changing the current systems and structures that make education inequitable before we start thinking about how AI might level the playing field in education.

Steve: One thing you brought up there, Jason, that’s important about AI right now, is that it’s commercially driven. It’s capitalistic. That’s what is driving innovation; it’s not for the “common good.” It’s large-dollar investments by organizations that are trying to figure out how to corner the market and make the most profit. Public education is a social service in a capitalistic society, so the two are always going to be at odds. But to your point, AI is being driven much more by private enterprise than anything else. There are great organizations trying to build energy around that public good component, and certainly the feds have started to introduce regulation and governance. The EU just introduced the first formal governance around AI, which will put some teeth into the rules for anyone trying to do business in the EU. It will be interesting to see what the U.S. does around it.

Jason: I could be wrong on the exact figure, but I believe that OpenAI said they were going to seek almost a trillion – with a T – dollars in investment. I think that number clearly illustrates your point. If you can do some societal good, cool, my hope is that you do so, but the reality is that there’s a lot of money flowing into this space, and with that amount of money comes some expectations.

But I’m pleased with what we’re seeing in terms of early signals from firms like Play Lab, which is creating things like a generative AI sandbox for educators to create tools and build a community of practice. They are framing it as a public utility. It’s free. They are not collecting your data. They want you to build, to put it out in the world, to tweak other people’s stuff and to bolster that spirit of experimentation and innovation. I think this idea of a public utility is compelling.

What ethical considerations or concerns arise with the implementation of AI in the classroom? 

Steve: Well, obviously there’s a ton. We’ve touched on a few. Algorithmic bias we hear quite a bit about: the quality of the outputs you get from AI is only as good as the quality of the sources and inputs it’s trained on. If you have a tighter model and more controlled sets of information, then you’re going to get more predictable and higher-quality outcomes. Deep fakes: there has already been so much difficulty in this country over what constitutes “truth,” and that is only going to accelerate with these tools. We touched on the influence of corporations and private investment; I think there are a lot of ethical considerations in their usage of our data – not just the ethical nature of what we’re getting out of AI, but what we are feeding it. And anthropomorphism: there is a distinct possibility now that we’re going to start associating human qualities with technology. We are already seeing it. There are plenty of studies [1, 2, 3] around Alexa and young kids assuming that Alexa is human and learning behaviors from it.

Jason: Yeah. We’ve got a Google house. When we go to bed, you know, my partner says, “Goodnight Google.” And when it does things like shut off the lights, she responds with, “Thank you.” It’s always good to be polite, I guess. But the tool is not looking for that level of gratitude, right? When I think about this, one of the concerns I have, and it’s not so much an ethical concern, is this feedback loop we might create.

Imagine a student is given a writing assignment and goes to ChatGPT, where they can essentially have the assignment done for them. They turn in said assignment, and then the teacher uses a large language model (LLM) to assist in grading it. So, we’ve created this feedback loop where AI is essentially evaluating itself, or perhaps evaluating another algorithm. As humans we want to look for shortcuts and, in the best of times, a thought partner. But it makes me wonder: if so much of our education system is based on the evaluation and transmission of knowledge that prioritizes the written word, will technology dislodge the written word in favor of superpower levels of critical thinking?

If so much of our education system is based on the evaluation and transmission of knowledge that prioritizes the written word, will technology dislodge the written word in favor of superpower levels of critical thinking?
Jason Swanson, Senior Director, Strategic Foresight, KnowledgeWorks

Steve: I think there is a lot of validity to what you are saying, Jason. There’s a distinct possibility that as these tools evolve, and because humans tend to take the path of least resistance, we will move away from the reliance we once had on the written word.

Back in 2007, I was just starting to think about bandwidth, the sophistication of software and the ability to quickly share information across the Internet. I posted a video on my YouTube channel called “Fatal Blow to Text Literacy,” and I was being very provocative at the time, because as I was reflecting, our alphabet was created because man saw inefficiencies in our ability to save and share thoughts. Now, as these tools become more sophisticated, our written word is likely to become less important because we don’t need to rely on it. We have storage capacity, we have advanced bandwidth and we have advanced tools that allow us to exchange, store and share ideas in more sophisticated ways, even visually, that don’t involve the glyphs we have relied on for our language.

What do you see as AI’s influence on the world of work and what we prioritize within education? 

Jason: KnowledgeWorks took this topic on in a deep way in 2017, and I think that the insights there still hold true. We first modeled our big uncertainties at the time. What’s the level of potential for job reconfiguration, or is it straight-up human displacement? The other uncertainty was whether we would have a coordinated response. Would the feds, would the states look at this and say, “We need to seriously re-engineer our system to meet the new realities of work,” or would we see the more fractured, laissez-faire approach where some folks do this and other folks do that? The conclusion we came to was that no matter what future you’re looking at, there are some constants. From our forecast and our interviews, we saw that the soft skills are front and center. You’re going to need to know how to communicate. You’re going to have to show up on time. You’re going to have to be a problem solver, a self-starter, a self-learner and a critical thinker. You’ll probably have to have some level of AI ethics and the ability to use AI. It’s not an overstatement to say that the future of humanity is going to be characterized by man and AI in symbiosis.

What does it mean to weave these new sets of considerations into our education system? I think what starts to get hard is when we myopically look at the future of work as if it is a single point and perhaps much closer than we realize. Rather, we need to recognize that there are multiple futures of work. Education, at its best, is a portal into the future. But it’s a portal into a world that we know nothing about, so educators are tasked with getting young people ready for jobs to the best of their knowledge. But in all these futures, it goes back to that set of core readiness traits. When we look at the guiding purpose of education, it’s to equip learners with the skills, knowledge and dispositions that one would need to thrive in each future.

There are skills that are constant, that are timeless for any job. I think it will put a higher value on metacognitive and durable skills, and it will create less value for post-secondary degrees. The affordances of digital tech writ large could mean you can go on a self-learning journey, capture it and get it credentialed without spending a ton of time and expense. And I’ve been surprised to see workplace learning ascend when it just didn’t exist before. Companies are investing back into their workforces in ways that I’m finding surprising. I think that it’s going to create some turbulence on the post-secondary side of things.

The soft and/or durable skills will start to rise to a greater degree – prioritizing what makes us uniquely human.
Steve Nordmark, Founder and Consultant, Learning Community Insight, LLC

Steve: Yeah. And I really appreciate your perspective on that. Several things popped into my mind. I definitely agree, number one, with your perspective about higher ed. I think this will significantly reshape what higher ed is and the way it’s structured. And I think it will be much more experiential. Not everything should serve the business interest, but in some sense, it’s going to have to. I mean, that’s what’s driving what we do. I think there will be more partnerships and more real-world experiential learning. My son has been fortunate enough to participate in a sports medicine program, and as opposed to just abstractly taking, say, math and science, his learning is much more applied to the specific context of the work he will be doing, and that appeals to his sensibility. It appeals to my sensibility in the way that we can remodel education.

I 100% agree that the soft and/or durable skills will start to rise to a greater degree – prioritizing what makes us uniquely human.

Take what the Carnegie Foundation is doing to dismantle the whole Carnegie unit, hopefully getting rid of the rudimentary nature of the curriculum that’s available, especially in high school. This would allow space for us to develop those critical thinkers and problem solvers and communicators. As opposed to specifically just learning this math or this science, we instead start to think about it more in a real-world context. One of my biggest frustrations when I went from high school into my engineering studies was that there were very few opportunities, ironically, to solve problems in a lab or to work on real-world problems. I mean, there is no discipline I can think of in a higher ed setting that should demand that type of learning more than engineering, but most of what I received was all textbook-based. I didn’t get as much out of that education as I should have or could have, had it been oriented more toward solving real-world problems.

The featured image was created by Adobe’s artificial intelligence. Learn more about how Adobe trains its AI ethically.


