As powerful AI tools are deployed in classrooms nationwide, what should parents and educators know about the risks and opportunities specific to children using them in educational settings? Children and Screens held the #AskTheExperts webinar, “New School? Promises and Risks of AI in the Classroom” on Wednesday, March 19, 2025.

Panelists discussed the intersection of AI and children’s learning, including how children process information from AI-driven tools, how AI can best be used to support children’s cognitive development, and what this means for education. They also discussed the use of AI in the classroom, lessons from the broader edtech field, and best practices for ethically integrating AI into educational systems. Additionally, the panel explored AI literacy with a focus on the key skills that parents, teachers, and children need to navigate the rapidly changing digital landscape.

Speakers

  • Mark Warschauer, PhD

    Distinguished Professor of Education, University of California, Irvine
    Moderator
  • Judith Danovitch, PhD

    Professor of Psychological and Brain Sciences, University of Louisville
  • Adam Dubé, PhD

    Associate Professor of Learning Sciences, McGill University
  • Mathilde Cerioli, PhD

    Chief Scientist, everyone.AI
  • Amanda Bickerstaff, MS Ed

    Co-Founder and CEO of AI for Education

00:00:11 – Introductions by Executive Director of Children and Screens Kris Perry.

00:01:50 – Moderator Mark Warschauer on what generative AI is and how it is relevant to children’s learning.

00:07:18 – Judith Danovitch on how cognitive development impacts children’s trust in and learning from AI.

00:15:39 – Moderator follow-ups: At what age should children explicitly learn about AI and start using it for educational purposes? Can using AI negatively alter children’s cognitive development?

00:18:43 – Adam Dubé on taking a critical approach to implementing generative AI in education.

00:27:47 – Moderator follow-ups: How do children view AI? At what age is use of AI in the classroom appropriate? Could generative AI be a positive tool if we teach students to critically judge its responses?

00:33:52 – Mathilde Cerioli on ethical issues surrounding the use of AI in classrooms.

00:42:38 – Moderator follow-up: How might AI tools and chatbots alter children’s socialization?

00:45:11 – Amanda Bickerstaff on the importance of teaching children AI literacy.

00:55:29 – The panel addresses questions from the audience.

00:56:42 – Q&A: How can we weigh the risks and benefits related to integrating AI into education?

01:03:27 – Q&A: Should parents be more involved in making decisions about allowing their children to use AI in school, compared to other technologies?

01:10:31 – Q&A: How does AI affect learning goals and the skills youth develop?

01:17:40 – Q&A: What might we gain or lose from using generative AI as tutors that are constantly available to children?

01:24:18 – Wrap-up with Children and Screens’ Executive Director Kris Perry.

[Kris Perry]: Hello and welcome to today’s Ask the Experts webinar, “New School? Promises and Risks of AI in the Classroom.” I’m Kris Perry, Executive Director of Children and Screens. Artificial intelligence is rapidly reshaping the way children learn, interact, and process information. With AI tools making their way into classrooms, educators and parents are faced with critical questions: How does AI impact children’s cognitive development? What are the best ways to integrate AI into education systems responsibly? And what key skills do students and teachers need to navigate this evolving landscape? Today’s discussion will explore the promises and risks of AI in education, drawing from lessons in the broader edtech field, and will provide best practices for ethically incorporating AI into classrooms. Our expert panel will break down how children process information from AI-driven tools, how AI can best be used to support children’s cognitive development, and what this means for education. Now, I would like to introduce you to today’s moderator, Dr. Mark Warschauer. Dr. Warschauer is a Distinguished Professor of Education and Director of the Digital Learning Lab at UC Irvine, with affiliated appointments in informatics, language science, and psychological science. He is one of the most widely cited scholars in the world on digital learning topics such as computer-assisted language learning, digital literacy, the digital divide, laptops in schools, and artificial intelligence in education. Welcome, Mark.

[Dr. Mark Warschauer]: Thank you very much, Kris. It’s a pleasure to be here. And thank you to Children and Screens for organizing this important session. And thanks to all of you for coming out today to join us for a discussion of this important topic. I’m going to offer a few words of introduction and then turn it over to our terrific panelists. Generative AI is what is known as a general purpose technology, much like writing, the printing press, the steam engine, electricity, and computers and the internet. These transformative technologies drive innovation across all sectors of the economy and society, making it essential for both individuals and nations to master and adapt to them. Not surprisingly, such technologies shape learning and childhood in both positive and negative ways. Electricity illuminated schools and homes, but also risked electrocuting people. The internet placed a world of knowledge at children’s fingertips, yet exposed them to misinformation and harmful content. So what will AI bring?

Let’s start with a couple of definitions. AI refers to technology that allows computers to mimic human thinking, problem solving, and learning. It’s the kind of technology that powers voice assistants, like Siri or Alexa; chatbots that frustrate you when you call a service center; or personalized recommendations for Spotify or Netflix. AI applications in education typically fall into two types: generative AI – like ChatGPT, which can create text, images, and even code – and adaptive AI, which adjusts to a student’s progress, helping to personalize learning – think of platforms like DreamBox or Khan Academy’s AI-driven suggestions. These two often work together. For example, the way that Khan Academy’s Khanmigo provides adaptive learning is by wrapping hidden prompts around a back end that communicates with generative AI.

AI has countless applications for education. It can personalize learning, providing students with extra support tailored to their needs. It can help teachers by automating tasks like grading and lesson planning. And for students with disabilities, AI-driven tools can offer new ways to access learning. Yet AI also carries real dangers. It can be used to create deepfakes. It can wildly hallucinate. It can be biased. And most importantly, it can substitute for children’s own writing, thinking, and learning.

Consider this quotation expressing these concerns. “This invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing produced by external characters, which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding, and you offer your pupils the appearance of wisdom, but not true wisdom, for they will read many things without instruction, and will therefore seem to know many things when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.” Now who do you think said these things? Do you think it was Bill Gates, Elon Musk, Mark Zuckerberg, or perhaps ChatGPT? Take a moment to think about that. Okay, let’s go ahead and look. It was actually Socrates speaking more than 2,400 years ago, quoted by Plato in 370 BCE. And Socrates and Plato were obviously not speaking about AI, but about the invention of writing.
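To make that “hidden prompts” idea concrete, here is a minimal illustrative sketch of how a tutoring app might quietly wrap instructions and student context around a call to a generative AI model. The prompt text, model name, and function below are assumptions for illustration, not Khan Academy’s actual implementation.

```python
# Illustrative sketch only: the hidden prompt, model name, and structure
# are assumptions, not any real product's code.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

HIDDEN_PROMPT = (
    "You are a patient math tutor for a fourth grader. Never give the final "
    "answer directly; ask one short guiding question at a time, and adjust "
    "difficulty based on the student's progress notes."
)

def tutor_reply(student_message: str, progress_notes: str) -> str:
    """Wrap hidden instructions and adaptive context around the student's message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": HIDDEN_PROMPT},
            {"role": "system", "content": f"Student progress so far: {progress_notes}"},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content
```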
I share this to remind you that while AI carries both benefits and risks, it is unlikely to singlehandedly destroy our children’s minds any more than writing, the printing press, or computers and the internet did. However, like these other tools, it powerfully impacts how we communicate, learn, and think. So it’s incumbent upon us to critically consider how it can be ethically and responsibly used in our children’s education. To help inform this question, we’re fortunate to have an outstanding panel today ready to shed light on AI’s relationship to cognition, its education use cases, and the best ways to integrate it into the classroom. To kick off our panel, I’m pleased to introduce Judith Danovitch, Professor of Psychological and Brain Sciences at the University of Louisville. Dr. Danovitch earned her PhD in Psychology from Yale University in 2005. Her research focuses on how preschool and elementary school children think about knowledge and expertise, and how they evaluate different kinds of information sources. She has published over 50 articles in scientific journals and has written for The New York Times. As a parent to a screen-time-loving child, Dr. Danovitch aims to translate what the research says into practical advice for caregivers and educators. Judith.

[Dr. Judith Danovitch]: Thank you for that introduction. So I’m going to start off by talking about how children are exposed to technology and the internet, literally now from the time that they’re born. And these technologies are growing increasingly interactive and sophisticated. So, for example, digital voice assistants, like Amazon’s Alexa or Google Home, have enabled even very young children, who cannot yet read or write, to search for and even obtain information from the internet. And children are being increasingly exposed not only to these technologies, but to AI-driven technologies both at home and in their classrooms. And so I’m going to talk a bit about what the research in cognitive development can tell us about how children might respond to and how they might potentially learn from these AI-driven technologies. So I’m going to talk about three factors that might influence children’s trust in and learning from AI. The first one is the accuracy of the information source. Then I’ll talk briefly about – sorry – familiarity with the source. And the nature of the source’s responses and how human-like they seem. So again, I’m just going to give you all a quick overview of what the research on child development says about these things. So, in terms of accuracy, we know from about two decades of research in developmental psychology that children pay attention to and use an informant’s history of prior accuracy to judge potential informants. And so the classic way that this research has worked is that children see, or sometimes they watch a video of, two people who are naming objects. So, for example, in this case, one might name the object correctly and say, “This is a ball.” And the other is incorrect. And this is designed so that even children as young as age three know that they’re incorrect. Children see this happen a couple of times, repeatedly. Again, one person is always right. The other person is always wrong. And then they’re given a situation like this, where there’s an object that they don’t know the name of, and each person then gives a different answer to what it’s called. Right? So one might call it a wug and the other calls it a dax. And again, what we know and has now been replicated many times is that even three-year-olds are paying attention to this, and they will then trust and learn from the person who has had a history of giving accurate information over one who hasn’t. Back in 2013, one of my students and I extended this work to see if children would apply this principle to technology as well. So we showed them two computers, where one was consistently giving the right answer to a question, and one was not. And then again, we gave them a novel question – something that they wouldn’t know the answer to. And we found that, again, children as young as age four were able to rely more on the computer that had previously given accurate information. More recently, my collaborators and I have been extending this to things like websites, as well. And this is important because it shows us that children can apply the principles that they use to judge other people to judge new technologies, even if they’re not extremely familiar with them. Speaking of familiarity, another principle that children use to judge information sources is how familiar they are. So, for example, children will trust their teacher over a teacher whom they’ve never met before if both of them are giving them information.
More recently, my collaborator, Lauren Girouard-Hallam, who’s currently a post-doctoral fellow at the University of Michigan, and I have looked at familiarity with search engines. So we showed children ages four through eight the Google search engine, and we asked them a little bit about it. And by the way, even four-year-olds are familiar with Google, for the most part. And then, we asked them about their trust in and their preferences for Google over a human in terms of getting information. We then repeated this with a novel search engine that we made up. We called it “Anu”. We described it in very much the same way as Google. And again, we asked children to what extent they trusted that information source. And what we found – this is a graph from our research – is that, if you’re looking at the right end of the graph, the older children in our sample of seven- and eight-year-olds for the most part trusted both search engines, Google and the new one, at very high rates. However, for younger children, for our four- and five-year-olds, you can really see on the left side of the screen a real gap, where they are showing a great deal more trust in Google than they are in this novel search engine. And we believe that this is because Google is familiar to them. They’ve heard about it. They’ve seen people use it. Even if, you know, a lot of four-year-olds haven’t necessarily used Google themselves, just the fact that it’s out there and it’s in their environment seems to make them more trusting of it. And then finally, the third factor that I think we need to be paying attention to when we’re thinking about how children interact with AI and what they make of it is the kinds of responses that the technology or the AI-driven devices give them. So we know from, again, decades of research that the more human-like the responses, the more likely children are going to be to trust and learn from a technological source. And, you know, one example of this right now is voice assistants and smart speakers that do things like, in essence, remember who they’re talking to. And my understanding is that they’re just getting better and better on a daily basis. So for example, they’re very conversational. They can ask children, you know, “Would you like to hear your favorite song?” They can remember the child’s name in some cases. And this conversational nature is very likely to result in children forming what psychologists call parasocial relationships. Parasocial relationships are things that seem like a real social relationship, but they really only go one way. The original research on this was with television characters, right? And children would kind of feel like they had formed a relationship, or a friendship, with a character on the screen. And the stronger that kind of relationship, the more children are going to be likely to want to interact with these things, and to listen to them. Another thing that these devices can do now is they can also – they’re starting to at least – be able to acknowledge uncertainty or even say things like, “I don’t know, I can’t answer that question.” And I think, again, we need to pay attention to that, because for young children, we know from research on their interactions with people, that’s actually a valuable cue to reliability.
And so, when a human being says to you, you know, “I can’t answer your question,” when they don’t know the answer, you’re actually more likely to trust them than someone who has, you know, confidence in their answer when they shouldn’t. So, for example, someone who claims they know what’s in a box even though they haven’t looked inside it. Children are very sensitive to that. So just to wrap up, these are three factors that, again, we know from the research on cognitive development that children are paying attention to and that these are likely to be important when we’re judging how they’re going to react to AI and the kinds of information or teaching that AI may have to offer them. And I just want to add one more thing, which is that on top of all this, we also need to be thinking about the child’s age and their experience. And so in all of this research, age and experience matter. Children’s judgments are changing rapidly. And again, we shouldn’t underestimate the fact that even three- and four-year-old children are making these kinds of judgments, and are potentially having experience with AI. And we need to start thinking about that early in their development. Now, I will stop there.

[Dr. Mark Warschauer]: Thank you, Judith, for those very thoughtful remarks. A person in the webinar had a question that’s really related to the last point you made that I’d like to follow up on. So we know from things like calculators and word processing that the suitability of technology varies very much by children’s age. You hinted at the end that age is an important point, but really, at what age do you think children should start explicitly learning about AI and start using AI for educational purposes?

[Dr. Judith Danovitch]: That’s a great question. In terms of explicitly learning and talking to children about AI, I actually think as early as possible. In the same way that, you know, I think a lot of adults might think like, “Oh, well, four-year-olds aren’t going to know what Google is.” Well, yes, they don’t know what it is, but they are being exposed to it. They hear other people talking about it. And so, if it’s there, present in the environment, I see no reason why we shouldn’t start talking about it. Right? Why adults shouldn’t say things like, “Well, I’m going to use this technology, I’m going to use this AI, you know, to solve this kind of problem because it’s really good at solving that kind of problem,” or, “I’m not using it because, you know, it’s probably not going to give me the right answer.” You know, children learn a great deal from modeling and from parent-child conversations. And so I really believe that we should start these at home, and potentially also in the classroom, very young, a lot younger than I think we do right now.

[Dr. Mark Warschauer]: And are you concerned at all that using AI can alter people’s– children’s cognitive development in negative ways?

[Dr. Judith Danovitch]: That’s– I think that’s the million dollar question. Right? I wish I had an answer to it. I think that we really need to look carefully at this. And when we say “alter development”, I think we also need to break down, “What does that mean?” And you know, I’m a firm believer that there are both risks and benefits to children using technology. And that what we really need is to better understand these things. 

[Dr. Mark Warschauer]: Thank you. Thank you so much, Judith. 

[Dr. Judith Danovitch]: Thank you.

[Dr. Mark Warschauer]: I’m going to go on and introduce our next speaker. So, Adam Dubé is an Associate Professor of Learning Sciences in the Faculty of Education at McGill University. He was the 2020 McGill Faculty of Education Distinguished Teacher Award recipient, the 2021 EdTech Leadership Award recipient, and an Early Career Fellow of the American Educational Research Association and the Society for Research in Child Development in Middle Childhood Education and Development. He investigates how technology augments the learning process, with research on children’s theories of artificial minds, how AI can improve game-based learning, and whether educators prefer educational apps powered by generative AI. Adam.

[Dr. Adam Dubé]: Thanks, Mark. So I’m Adam Dubé. I’m at McGill University up in friendly Canada, and I’ll be talking to you about a middle path for using generative AI for education. And when I talk about this, what I’m talking about is not being an advocate, saying that we must adopt these things to transform classrooms, and not being an opponent, saying we can’t use these things whatsoever. Instead, we need to take a critical approach to the use of these technologies and ask, “What is their purpose, and how do we know if they are actually working?” That’s what I’ll be talking about today. And to answer those questions, I look at the broader landscape of technology and education. You know, new technologies have always been predicted as being able to “fix” education. There’s too few teachers. There’s not enough time. We need to engage students via meaningful learning. Unfortunately, a lot of these predictions are often wrong. On the left here are a bunch of the predictions that were made over the last decade about individual technologies that were supposed to “fix” education. We looked at these and found that many of them were never adopted, and if they were, they weren’t adopted until five or seven years after the original prediction. The reason these predictions were often wrong is that they’re often driven more by companies and market forces than the realities of classrooms and of learners. Also, a lot of technology that we purchase for classrooms doesn’t really work. We adopt technologies that aren’t designed for education into classrooms. We don’t require strong enough evidence that it’s working before we buy it, and we don’t evaluate if it’s working once it’s actually adopted. And so what we need to do with technology, and with generative AI in education, is, first off, not assume that it’s going to fix education. We have to demand evidence that it will work and that it is working once we buy it. So let’s talk about the state of generative AI in education. Back in 2023, the Stanford AI Index report was published, and it showed that 63% of K-12 educators had used generative AI in some form, at least once, for their teaching practice, and this was just seven months after it had launched. Furthermore, 88% of teachers thought that generative AI would have a positive impact on learning, and 76% thought that incorporating ChatGPT was important. Now, interestingly, these attitudes of teachers were more positive than their students’ attitudes towards generative AI. And so this was in 2023, before these things were widely rolled out and had implications for classrooms. What do we know now in 2025? Well, we’ve been doing some research asking educators to evaluate educational apps that are powered with or without generative AI. And what we’ve learned in our research is that just over 50% of educators report that they’re using AI in their teaching practice. And in terms of whether or not they prefer apps powered by AI, about 30% prefer these types of apps, 20% don’t prefer them, and about 60% are really indifferent. And this is a very different picture than the overly positive early indicators that we saw back in 2023, where teachers were generally adopting it early and had a very positive attitude. So what that means is that educators need information about how generative AI should be used for education and the ability to evaluate these products themselves. So let’s talk about that.
Now, one of the things that I like to talk about is that there are thousands of generative AI educational apps being made, with new ones being made all the time. It’s really hard to identify individual apps, so instead we have to talk about common uses of generative AI in education. And I like to group them into three common uses: it can be thought of as a resource, as a tutor, and as a tool. And I’ll tell you about each of these in turn. Now, importantly, each of these different uses has a different purpose in the classroom and different criteria that educators can use to evaluate whether or not they want to adopt them. We’ve been doing some work over the last year to see what research is published on these different uses, and the most common use is “as a tool.” And importantly, from this work, most of the research is done in higher education and relatively less research is done in K-12 education. Importantly, also, the question is whether or not these things seem to be working based on this early evidence over the last couple of years. Well, the evidence is really mixed. The largest number of studies show positive results, but an equal number of studies showed negative, null, or mixed results. So what that means is that there isn’t a clear answer about whether or not generative AI “works” in educational settings. Instead, educators will have to make informed decisions for themselves, for their classrooms, for their students. So how do they do that? Well, let’s talk about different types of generative AI apps. You can use them as a resource. Now, what’s a resource? That’s just a source of information for students. A textbook used to be a resource; nowadays, an online textbook or a search engine can be a resource for knowledge. So you can have a ChatGPT-style search engine. Now, should you use it? Well, the criteria should be that it is more accurate, has a broader breadth of knowledge, and is easier to use than the existing resource. Otherwise, we shouldn’t really be adopting these tools. Now, what ChatGPT-style search systems exist like this? Well, here’s an example from the Library of Congress, which right now you can search with a regular old search engine. But some researchers from the University of South Florida are building a generative AI-powered search engine that makes it easier to search, and it produces answers about the content in the Library of Congress that are accurate and in a more conversational tone. And so that is an expert-developed, education-focused search engine. What about just using any old ChatGPT-style search engine as a resource? Well, research is coming out showing that these things tend to provide inaccurate information at a rate that we might not want to be accepting. So here’s an example of a study that was looking at whether or not these search engines could properly cite correct information, and what it found is that these systems were constantly providing incorrect responses. And right now, what we’re seeing with these general-purpose search engines, like ChatGPT, Perplexity, Grok, and Gemini, is that they will confidently provide incorrect information to the user. And that should cause us concern when we need to use these as resources in classrooms. So, the other thing you can use these for is “as a tutor.” Now, the purpose of a tutor is to select, explain, and discuss information with students. And that tutor should be accurate.
They should be able to adapt what they’re explaining to the students’ needs, and they should be available in terms of cost and time of day. Now, there’s an extra thing that we should consider when we’re thinking about generative AI tutors, and that’s, “What’s the difference between learning from a technology versus a human tutor?” Judith’s talk earlier gave us some great guidelines to think about how students might be learning from these technologies. The other issue is that these tutors are just as likely to provide confident, inaccurate information to students. So maybe we want to hesitate before we wildly adopt these things into classrooms. And then the other point, from my own work, is that we ask students how they believe AI systems think. And young children between 4 and 8 years of age are reporting that they think these systems exist somewhere between a computer and a human being, which means that these students need more conversations about AI. They need digital literacy before we start telling them to learn from these things and use them as tutors. And then that final use is to think of generative AI as a tool. Now, use as a tool is just helping students complete a task. Say, for example, a student is told to read a book and then produce a video about that book. It’s a common assignment. Now, you could use a generative AI video editor in that class, but should you do so? Well, the questions you should ask yourself are, “How well does it perform the task? How well does it edit the video?” And more importantly, “Do you care if the student learns how to edit video?” Because one of the things about generative AI tools is that a lot of times they do the steps for the students – they do the work on the student’s behalf. So the student isn’t really going to learn how to perform that task. It’s just going to be done for them. Now, that’s one question you could ask here. Another is when you’re using these things as what we call mind tools, where the purpose of using them is to help engage students with the thinking of a discipline, like teaching them how to write. And the idea is that using these tools should help them think like a writer. Unfortunately, we have to be wary of tools that think on students’ behalf. So here’s an example of a generative AI writing assistant that advertises itself as, “Write an intro for my project.” Not, “Help me write an intro,” but, “I will do it for you.” And there are too many of these AI writing systems out there that are aiming to do the brainstorming, do the outline, do the writing on the students’ behalf. And we should be wary of those uses. And so I’ll finish with this: technology doesn’t fix education. Generative AI doesn’t fix education. Instead, expert educators making informed decisions about what makes it into their classroom will. And so, that’s all I have for you. And if you’re interested in learning about any of our work, you can check out our website.

[Dr. Mark Warschauer]: Thank you. Thank you, Adam, for those very astute observations. We have one or two interesting questions from the audience. First of all, what do we know about how children view AI, their feelings about it and experiences with it? Are they afraid of it? Are they in awe of it? Or do they just take it for granted because they’ve grown up with it?

[Dr. Adam Dubé]: Yeah, these are great questions. And so we’re actively doing research on this. In our studies, you know, like I was saying, we ask children what they think AI is. Like, how does it generate answers, how does it know what it knows? And the answers they’re giving us are a mix between the response a computer would give you – let’s say a Google system – and the response a human being would give. And so in their minds and their conceptions, there’s this sort of new category of being that they’re kind of creating. And that’s something that we want to help them navigate – say, for example, with Judith’s comment earlier about having early conversations about these systems. And in some previous work, children tend to have generally friendly conversations with, say, Amazon Alexa when they use it in their house. Parents report that children tend to interact with these things in a friendly manner. They comment that sometimes children are asking these things to tell them a joke. And so children usually have a somewhat positive relationship with these systems – that’s the most common response we’ve seen in our research. So there’s that sort of friendly, almost social relationship that they’re developing with these systems.

[Dr. Mark Warschauer]: Thank you, Adam. And another important audience question. A lot of people are very concerned about age appropriateness. And Judith sort of talked to the point of explaining to children what AI is and helping them understand what it is. But let’s go beyond that to school settings where educators are actually choosing among these many tools and implementing them consciously in the classroom. At what ages do you think teachers should be selecting and using generative AI tools to help students’ learning?

[Dr. Adam Dubé]: This is a really great and important question. I think it depends on what role the generative AI plays. If the student is interacting with the generative AI – say, for example, they’re speaking with it and talking to it, either via text or with their voice – then we have to really consider children’s ability to engage in complex perspective taking and thinking about the mind of, or how, this system works. Children typically don’t develop more complex perspective-taking until eight or ten years of age. So we might want to wait until children are a little bit older and can reason about the systems that they’re interacting with. Now, if there are other systems where generative AI is in the background – say, for example, it’s just producing content or completing a task, and the child doesn’t have to interact with it directly – then perhaps there’s less of a concern about how the student is interpreting and making sense of these systems.

[Dr. Mark Warschauer]: Thank you. And I have one more question of my own. You know, they say that the best way to get information on the internet is not to ask a question, but to say something wrong. And when you say something wrong, you know, everybody’s going to give you the right information. You talked a lot about the danger of the fact that generative AI is often wrong. But if we teach students to learn about that and know that and to critically judge the responses from generative AI, can we actually turn that to a positive learning thing? Does generative AI have to be perfect to be a good tutor? 

[Dr. Adam Dubé]: No. Human beings aren’t perfect as tutors. I think Judith talked about this – it’s important that human beings recognize the limitations of their knowledge, whereas these systems tend to confidently provide incorrect information. And they can become obstinate in that confidence. They will say they know the answer, you try to correct them, and they still sometimes say they know the answer. And so that’s an interesting piece. The other thing we have to consider here is – I agree with you that being able to identify when these things make a mistake, and then engage in conversations around that mistake, there’s a lot of opportunity there for learning. But we have to think about the age of the learner. I think the student has to have a level of understanding and expertise to be able to engage in that type of Socratic dialog, or back-and-forth dialog, with these systems, whereas younger students don’t have the background expertise to be able to do so. And when I see recommendations that we should allow students to learn from these systems and identify the mistakes, a lot of times those recommendations are made by people who are themselves already experts on the topics. And so when they’re using these systems, they recognize when something doesn’t make sense and they go, “Well, that was a signal to me that I needed to go deeper.” But do young learners who don’t have that background and understanding – are they able to do that in the same way? And for me, that’s a whole interesting research space that we can get into going forward. So I completely agree. I think it’s really interesting, but I’m more wary with younger students with less expertise, who can’t reason about the system, as opposed to older students, who we could give those instructions to and who already have background knowledge, like you’re talking about, Mark.

[Dr. Mark Warschauer]: Thank you so much. Okay. Mathilde Cerioli, PhD, is Chief Scientific Officer and Co-Founder at everyone.AI, a nonprofit focused on educating about the opportunities and risks of AI for children. She holds a PhD in Cognitive Neuroscience and a Master’s Degree in Psychology, and her research focuses on the intersection of AI and cognitive and social-emotional child development. Within everyone.AI, and in partnership with the Paris Peace Forum, she contributed to the launch of the Beneficial AI for Children Coalition, an international multi-stakeholder initiative that aims to orient the development, deployment, and adoption of beneficial AI for children and adolescents. Mathilde. 

[Dr. Mathilde Cerioli]: Hi. Thank you. Thank you so much for having me. So, I want to talk a bit more today about how we integrate AI into the educational system, and also, what really are the ethical considerations and how do they translate? Because we talk about ethics in AI a lot, but concretely, in classrooms with children, what do those concepts mean? So before we get in there, I want to just take a step back on why we should also have a specific focus on children when we do research on AI and the impact it has on us as humans. The first one is that children are more vulnerable to their environment because they learn a lot, and who they become depends in part on their genetics, but their environment will also shape the experiences they have. And AI is coming quickly. It’s becoming ubiquitous. It’s a bit everywhere in their environment. And so we need to really think: Is the AI that’s in that environment going to promote learning, promote development, or is it going to replace and automate some of the tasks that we want them to learn? And this is specifically important for children because they go through what we call a “sensitive period of development.” So for instance, children all around the world learn to speak a language at the same age, because that’s the moment their brain is primed to learn language. And when we learn outside of those critical periods, the learning still happens, but not to the same extent. That’s why, when I’m speaking in English, I still have an accent – because I learned it outside of this critical period. So, we want to make sure that when they are learning language, when they are learning social skills, interaction, empathy, theory of mind, they have a great deal of experiences that promote that. Now, when it comes to education, I think it’s important to remember that being in school is about learning a lot more than academics. So AI won’t be able to teach everything to those children, and it wouldn’t even be where we should go, because that’s the age where they learn to be with other adults than their parents. That’s when they learn perspective taking. That’s when they learn to be at recess, to take turns, to learn to be frustrated with others, how to reconcile. So when we think about AI, we’re also going to think: When do we want to use it? Where can it actually help? And how do we make sure that we keep the interactions that are essential for children? The good thing is that learning in the brain is just pure repetition. You learn because you do something over and over and over again. So when we think of AI, we need to make sure that the tools we provide are actually helping children repeat – repeating in different ways deepens their learning – and are not doing or replacing some of that repetition. You don’t want the AI to do it for the child, but the AI can encourage the child to do it and provide more opportunities for learning. Another thing that’s going to be important is that AI tends to gamify a lot. Because when the question is, “Do we have AI in the classroom?” – we already have a great deal of AI. Any app that has some gamification uses some level of AI to do that and to personalize it to the child. But what happens is sometimes children will be really interested in the game itself, the reward they get, and it moves them a bit away from the learning, and probably away from just the pleasure of learning to learn. And we have to remember children are highly curious.
They actually like and enjoy learning, so we need to promote that for them. And finally, one of the issues we have right now in education with the tools that are provided is that the metric is engagement. So it’s not how much a child learns – that’s not what’s looked at by the companies – it’s how much time they spend on there. And AI can really promote the learning by complexifying the current interfaces we have, but we can also really increase engagement just by making them even more attractive. So we really have to think of what we want for children. So that’s why I think it’s important to put all of this in the context of ethical AI. It’s always kind of the same principles that we see around AI, but how do we apply them to school? What do they mean? So the first one is, yes, AI should be safe. That goes without saying. But when we think of what is safe here, there are different things: there’s, of course, the content they are going to be exposed to. And there’s also the type of interaction. How do they relate to generative AI? What kind of interaction do we want to see? What is the level of parasocial engagement? Like, is it something that gives them information, or does it become their best friend and confidant? That plays into what we mean by safety. It has to be child-first AI, which means it’s done in their best interest. So if it’s a teaching tool, for instance, it’s done to really improve learning and not to improve time spent. It has to be accountable and inclusive. The goal is not to have tools that work for only 50% of the class, or a tool that only really helps the average child. And in that way, because the AI can personalize, it can actually be more inclusive of some children that don’t always have the resources needed to really cater to their own personal learning. And we know, for instance, that text-to-speech with AI can be a great assistive technology for some children who struggle with learning disabilities. Then we need to have AI that’s very transparent. And I will give you a very specific example: with those AI systems, sometimes they give you an output, but you don’t really know how they got there. So let’s say you have an AI that is grading a child’s paper. You want to make sure it’s actually grading the paper – the argument, the level of vocabulary – and it’s not inferring things about the child based on the vocabulary they use, for instance, where it can guess the socioeconomic level and then say, “Oh, by probability, this type of child gets that grade.” And so it grades based on the profile and no longer on the work. So we need that transparency and that fairness to make sure that those AIs do not reproduce biases and stereotypes and actually increase those inequalities. And finally, one that is really important: we talk about data privacy with AI, but AI doesn’t only have access to a child’s age and where they come from; it also has access to how a child thinks, what interests them, what their preferences are, what is convincing to them and what isn’t – their whole personality. So we really need to have systems that are very secure. And then the final one that we need to add is that we need to make sure that all those AIs safeguard children’s cognitive and affective development, so they provide what is needed at the specific age. Kind of like the calculator: for a six-year-old who’s learning rapid math, counting, developing a concept of number, a calculator’s not great at that age. It doesn’t do much for them.
But once they’re in high school and they’re doing more abstract concepts, it’s great, because they already know how to do those calculations – they’re already independent. When we think of generative AI, it is the same: you shouldn’t start learning how to write with an AI that does it for you, but the AI can be good at prompting the child. So I think the question comes back to: should we ban AI in the classroom? The first thing is, it’s already in the classroom. There’s been AI for a long time; it’s not new. And I think banning it would really be a missed opportunity. There are things that we can do with it that we weren’t able to do before. But to do that well, we need to be really purposeful in what we do, in how we use those tools, and in who helps work on this. Because often what I’ve been seeing in my consulting work with edtech companies is that they are very good at creating great apps that run well. The design is great. And we need to make sure – and I’m almost done – we need to make sure that they actually consult more with educational experts, with neuroscientists, with psychologists, so that they can actually really deliver on the promises that AI holds. And that’s my time.

[Dr. Mark Warschauer]: Thank you. Thank you so much, Mathilde. Fascinating. So let’s turn to an audience question. We’ve all seen how cell phones and social media have affected children’s social interaction, and not always in positive ways. How do you anticipate that AI tools and chatbots may change the way kids socialize or don’t socialize? And are there any trends we should be concerned about?

[Dr. Mathilde Cerioli]: Yes. And I think we’ve all seen examples where social media has really taken a lot of time away from children, who spend a lot of time connecting on it. We know that in some cases there can be useful uses, and it’s always about who is using it and how they’re using it. I think we’re going to see the same thing with AI. So when it comes to generative AI apps, I think that’s really one of the concerns right now – things like Character AI. Yes, there’s no doubt that they are not developed for children’s benefit. And I think that’s really where we need to think and learn from the mistakes of the past. We know it has an impact. We know it’s reshaping the way children just are in the world. And so right now we can really think: okay, we know it’s impacting them, so how do we develop products that are actually creating solutions to existing problems and not creating new problems that we will later need to fix? So that’s really the work we’re also doing with companies, because they’re more aware, in a way, than Facebook was when it was creating its app, of the fact that this will have real-life impact. So there are great ways of thinking about generative AI: Is it challenging them? Is it pushing them in a way that’s not about trying to be their best friend – to connect, to give them tips on how to be with others at an age where that’s difficult? But should it become their best friend? Certainly not, because that’s when they need to learn to be around others.

[Dr. Mark Warschauer]: Thank you. Thank you so much. That’s a really, really important point for all of us to keep in mind. Okay, I’m going to go on to our next speaker. Amanda Bickerstaff is the co-founder and CEO of AI for Education, a former high school biology teacher, and an edtech executive. With over 20 years of experience in the education sector, Amanda has a deep understanding of the challenges and opportunities that AI can offer. She’s a frequent consultant, speaker, and writer on the topic of AI in education, leading workshops and professional learning across both K-12 and higher education. Amanda is committed to helping educators, staff, and students maximize their potential through the ethical and equitable adoption of AI. Amanda?

[Amanda Bickerstaff]: Hi everyone. Well, thank you for having me on, and I appreciate everyone that’s come before and their perspectives. I’m going to be focusing on AI literacy, because we have been an advocate for this – really, generative AI literacy – for almost two years now. So, I’m a former teacher and came to generative AI in a very interesting way. Actually, the first time I ever used ChatGPT was asking it to write a rubric. And when it not only wrote a good rubric but also formatted the chart, I realized pretty quickly that this was something we’re going to have to really start to understand at a foundational level. And so the first thing I want to talk about with AI literacy is that we do believe at AI for Education that AI literacy is going to be a fundamental 21st century skill. We believe that AI literacy is more akin to digital literacy and media literacy than traditional computational literacy. And we do believe this is for everyone. And what you’ll notice here is it doesn’t say students – it actually says a fundamental skill for all of us. So whether you’re, you know, a teacher or leader, a parent, a student, we believe that AI literacy is going to be really important for you, because this is a thing. And this is actually like a metaphor: we’ve been working with schools and districts and higher education institutions across the US and the world for just about two years, and what we saw, both in, you know, staff rooms but also online, is that we are just really thinking about AI’s negative impacts as cheating. And so I really appreciate Mathilde and everyone else talking about how it goes beyond that. We created this because we believe that if we just focus on academic or professional integrity, we’re really missing both the opportunities that are available using generative AI and other AI tools, as well as some of the deeper issues that we see as well. So I want to focus a little bit on the positive. To start, I think that we are already seeing some opportunities, and this was talked about in the panel before, around on-demand feedback. You know, students like to do work at 2 a.m. – maybe not like it, maybe it’s just fueled by fear – but, you know, 2 a.m. is a sweet spot for kids, and you’re not going to be available as the educator or parent or tutor at that time. So being able to get on-demand feedback, and knowing how to critically evaluate the worth of the feedback, is really an opportunity that we’re already seeing. There are opportunities for, you know, personalized learning. We have other opportunities around creativity, assistive technology, even productivity. I do think, though, we don’t see as much productivity yet because the technology is still very early, but these things are coming. But then on the other side, and this has also been brought up by the other panelists, we have a much more complex potential impact of these tools, especially generative AI – something that is conversational, something that, as has been said before, creates, you know, hallucinations or inaccuracies that are part of how the technology works, actually, that can feel very, you know, very real. And also the bots can be very confident in that, which has been spoken about. So there are some really big issues around misinformation, but also around bias. You cannot talk about artificial intelligence systems without talking about bias. These tools primarily have been trained on human data.
Whether it’s the internet for ChatGPT or every stock photo for Midjourney. And so what happens is these tools, because they are probability machines, can not only reflect biases in the training data sets but also amplify them. And that can be very impactful. Deepfakes, for example, are happening all over the world, but now more and more in schools. And so being able to, A) recognize that we should have a “do no harm” mindset, but also that this is something that could be happening right now within your organization – it’s something that we really want to focus on as a greater kind of view into the potential harms. And the last thing – Mathilde talked a little bit about this with Character AI – is this idea of artificial intimacy or AI companionship, which we know is starting to impact young people. And there have been cases of, you know, mental health crises and also, unfortunately, suicide, with students – or young people – that become very isolated through their interactions with artificial companions. So the way we think of what AI literacy is, is very much about the knowledge, the skills, and the mindsets – and I really want to underline mindsets – that enable individuals to use AI in three ways. We want them to be safe. We want them to be ethical. And we want them to be effective. And what you’ll notice there is that we are really balancing: we do believe that this needs to be a balance of that knowledge and those skills, but also new mindsets for using technology. And I will say, I think that this is what we’ve missed in the past with other digital literacy and the impact of social media and devices. But I do also want to point out that while it’s about being safe and ethical – of course keeping people safe, whether it’s you individually or those around you – there is also opportunity here to really find value in these tools already, even when they aren’t necessarily fit for purpose, especially if use is developmentally appropriate. Meaning that if you’re a college student and you don’t have access to lots of supports, this could be something that, with the right types of background and support, can really start to help you – not only in college itself, but also in thinking about your career. So what we believe very strongly is that foundational knowledge is understanding that AI has been around us for a really long time – we are all Generation AI – but that generative AI, the entirety of generative AI, is really from the last decade; understanding the differences between, you know, the subset and the larger field; and the role of training data, which we’ve spoken about in terms of bias, but also in terms of quality of outputs. And then that last thing – and this is why I think AI literacy matters, especially for those that are developmentally at an age and ready to do this – is that capabilities and limitations are by far the most moving target of this, because every day the technology changes, maybe gets a little bit better, but also sometimes gets a little bit worse as well.
And then what we really want to see, once we have these foundations, this learning and opportunity, is that if you are someone that has true AI literacy, we would love you to be able to use these tools in safe ways: by protecting your data privacy, being able to evaluate the risk of different tools, and also maintaining that healthy balance of, like, what is human – what makes us special? But also in the sense of, are we over-relying on these technologies, whether in academics, in our professions, or also in, you know, that kind of social-emotional component as well. The second is about ethics: maintaining academic and professional integrity. Like I said, this is for everyone. We love transparency, so if you’re using generative AI as an adult, we’d love you to be able to share that and really normalize it. There’s the awareness that it goes beyond that – you know, there are lots of ethical issues. Pretty much every single gen AI model or tool, every major model, is being sued right now by multiple, you know, artists, writers, newspapers. And understanding that, or even, you know, the climate impact, is something important. And then there’s that “do no harm” mindset. Some things might seem really fun, but they could be pretty harmful, especially around, let’s say, a deepfake. You think you’re making a funny image of your teacher, and then that gets taken out of context, and that teacher could be in trouble. And that last thing is really thinking about effective use – that opportunity of centering human originality. What makes us special? In fact, right now, to get the most out of generative AI, of course you need to learn how to prompt and interact with the tools, and evaluate and refine your outputs. But also, if you’re an expert in something and you lean into the tools as a way to augment you, you’re actually going to see a lot more value than with something that, you know, you don’t know a lot about. And so there’s a real opportunity here to augment what we love doing, all we care about, as well as starting to offload those things, especially as we get older, that take away from our flow state. And, you know, we’re not talking about replacing the learning, but once we’ve done the learning, how can these tools actually allow us to really focus on what we want to focus on? And then finally, a lot of people kind of think there’s only one tool out there, but there are all kinds of tools that can be better for different outcomes. And so being able to pick the best-fit tool is another way of being pretty effective with using generative AI. So the last thing I’ll say before I wrap up is that we know that students are using generative AI. The research shows that – in fact, ChatGPT has come out and said that students in the US are the largest single demographic using ChatGPT, and I believe it’s 13- to 24-year-olds, meaning it also includes our late middle and high schoolers. But I just want to end with this piece: I think that a lot of times we get caught up, with AI literacy, in the idea that students are kind of going to get one over on us. But I do want to point out that kids also, A) are either unsure of what’s appropriate because they haven’t had deliberate AI literacy training and/or guidelines of use, or they are pretty sure that there are acceptable uses and then there are unacceptable uses.
So I just wanted to point out that this is such an opportunity, because AI literacy is important for all of us right now, every single person in the world, and it's something that can bring us together. What we really want to see is students being able to thrive in this world, but we want the same for ourselves as adults, teachers, and leaders, because this is a real opportunity, and generative AI and AI systems are only going to become more ubiquitous. So this is our chance as a society to really spend some time in this space, because it's going to be incredibly important for us to understand what these tools are, know how to use them when appropriate, and also understand what's coming next. So thank you. That's the end of my presentation.

[Dr. Mark Warschauer]: Thank you very much. I see some really interesting themes running across the different speakers, and I'm also considering some of the additional questions coming in from the audience. So I'm going to transition to some general questions for the entire panel. One thing that's come up a lot is the notion of risks and benefits. We went through this, to a certain extent, with the rise of the internet 30 years ago. The internet had all sorts of potential benefits, both in general and for learning, but there were also a lot of threats and dangers, and schools went back and forth between banning it and over-relying on it. I feel like we're in a similar period of trying to feel things out right now, and AI has even more risks. You talked about things like deepfakes, and those are very powerful risks. Not everybody is subject to those risks equally, but not everybody has equal access to the potential benefits either. So how do we weigh the risks and benefits? Should we wait until there is definitive research? How do we weigh risks and benefits in integrating AI into our schools? I'll throw that out to whoever wants to speak to it.

[Dr. Adam Dubé]: Thanks, Mark. I'll take it. When we ask whether we should wait for the research to determine which uses are effective and which aren't, I think we also have to look at the research on other educational technologies, say, for example, digital games. There have been 30 years of research on digital games. Do they work? Sometimes. And that's after decades of studies and hundreds of meta-analyses; it's one of the areas I study, and the answer is still unclear. It all comes down to: it depends. How well is it designed? Are educators and educational experts involved in designing these tools? So are we going to wait for some magic point where we can say, yes, we have a definitive answer on how to use these things? That point isn't going to come. We're going to continue to evaluate and critique these tools to identify what works and what doesn't. There isn't going to be some point at which it suddenly becomes safe to make this decision, and I think that's an important mindset. What that means is that, in the present, we have to be critical about the risks we understand right now in using these tools, about what we know is likely working and what likely isn't, and we have to empower the people in education who are closest to educational practice to make these decisions. I think a lot of these decisions should come from educators themselves, enabling them to decide about the tools in their classrooms. The risk is that instead, a new suite of tools just gets unloaded onto classrooms: say, a school that uses Google Classroom finds that Google has simply turned on Gemini, so it's available to everybody, and all of a sudden every teacher has to adopt it because it's there. I think we need to resist that a little bit and instead enable educators to make decisions, based on their expertise as teachers, about what will work for their students.

[Amanda Bickerstaff]: Just as a counterpoint, I do think a really fascinating thing about this moment is that it's the first time in human history when the best, newest technology is primarily available for free, and in some cases without needing much bandwidth. So there are some significant opportunities to think about, with every single model maker putting these tools out because of the competition, and we do see some really interesting things happening with entrepreneurship regionally and in the global South. That's where it gets pretty interesting. I think the risk comes in if we just release it. ChatGPT was released as an experiment and became the fastest-growing consumer technology ever, right? Five weeks to 100 million users, and we're now at 400 million monthly users just with ChatGPT. And Snapchat AI, since we talked about kids, has 150 million users. So I think we can't keep going the way we've been going, where we assume people understand the risks and benefits without taking time to intentionally talk about what AI and generative AI are and are not. That's where I get a little bit worried. The great thing is that there are starting to be, at least at the state level in the US and also in the EU, areas of focus where AI literacy is being understood as a requirement, and that's where we're really hopeful. But I don't think we can just do this laissez-faire, where it gets released and keeps getting better and we're just there trying to figure it out. A teacher who has five preps and 125 students, how are they going to have time to do that without support, time, and better tools?

[Dr. Mathilde Cerioli]: Yes, and I can add to that. When we want to see how we can use AI in ways that are beneficial, we're not always clear among ourselves. There's a lot of research being done, but it's difficult to get a global picture. That is part of the work we want to do with the international coalition we launched in February, where we are pushing governments, because they're the ones adopting educational strategies. We have 12 governments in there, we have companies like OpenAI, Anthropic, and Google that are really showing up, and thankfully we have a lot of nonprofits and NGOs, including UNICEF. So we have a lot of people around that table asking, "Okay, how do we make sure we don't repeat past mistakes? How do we make sure that when we're developing these tools, we don't wait for regulations that will take too long to arrive anyway, and that we think ahead: if we're using a tool with a child, first, what does it mean for it to be beneficial? How do we make sure it's actually what we want to see, so that the tools we create deliver on the promises we make?" And honestly, working with them, there are a lot of things they don't know. So we also need to centralize the knowledge: How do children trust AI? What specific considerations should we have about AI in education? What do the limitations look like right now? We need to create a network of knowledge within those companies and within governments, because they have to regulate AI and children, and often the people who write the regulations are not experts in AI or in children. There's a real need to bridge that knowledge and have conversations like today's, where we bring together a lot of different perspectives and orient the work that is going to be done. Instead of saying companies are not doing a good job and shouldn't do this, we need to say: this is what a good job looks like, and this is how we do it. We build with them, we test, and we say, okay, that didn't work, how do we improve? And they need to work together as well, because otherwise you create unfair competition within the market economy. If they're all working on this, it's much easier to create industry standards so that everyone knows how to develop these tools.

[Dr. Mark Warschauer]: Thanks. Related to all of this is the role of parents. My understanding, from talking to schools, is that they've basically developed a system where parents buy in to the expertise of the schools to keep kids safe. Parents aren't deciding on a day-to-day basis whether their child should go on Google or use another digital tool at school. There's general digital literacy, but basically parents sign off to say their child has the right to use these digital tools in school, and then schools try to make sure the tools are used responsibly, and might inform parents or take away a student's access if they're used irresponsibly. Does AI fit into that system in a similar way, or is AI so categorically different that parents really need more veto power over whether and how their children use it? Any thoughts?

[Amanda Bickerstaff]: We're seeing that some districts are putting informed consent in place already, which is good. Districts large and small are already required to get permission for learning management systems and the like, especially for younger students, so I do think that's a trend. There are also some districts and schools making a huge effort to get parents up to speed; we've done our kind of 101 for parents the same way we do with teachers, with a slightly different area of focus. So that is a really positive trend. And the National Parents Union is doing a lot of work around the upcoming AI Literacy Day, which is at the end of next week, so that's a good signal too. Parents getting their heads around this, understanding what it is and isn't for their kids, matters. I mean, Alexa is going to be run by Claude, by Anthropic, in its next version. Siri is behind; Apple hasn't been able to figure it out. And how many little kids use Alexa to ask questions they probably should be asking their parents? Now, instead of a stock answer, they could get a soliloquy, or something that's incorrect. So I do like the idea of parents having that opportunity, but one of the things we always have to keep in mind is that generative AI is going to be more akin to electricity or the internet than to a device or an application. The idea that we can just turn it off doesn't hold: the devices in your home, the new laptop, the new iPad, the new phone, will have generative models inside them soon. So of course we want parents to feel informed and to have that consent, but we also have to recognize that this is something even a parent most likely won't be able to fully shield their children from for very long.

[Dr. Judith Danovitch]: I just wanted to add: think about the research going back many decades on television viewing, and the concerns people had about that. One of the pieces of advice was that adults should co-view, right? You should talk about the things you see together. That advice has persisted through all these different technologies. And I think, as Amanda was saying, the challenge here is that most adults can't really explain how AI works or where it's embedded. There's even some interesting research with children on this. When I was a child, you could troubleshoot computers and technology: you could do things like unplug it and see, oh, it's not plugged in, that's why it's not working. But now the mechanisms are becoming more and more opaque, and much more difficult for the average person to understand, let alone explain in a way that a young child might grasp. And I think that's going to be an ongoing challenge.

[Dr. Adam Dubé]: And just one more thing on what parents can do: that co-viewing, using these tools and talking about them with their children at home, but also being involved in their parent-teacher associations, asking their local schools and principals what their policies around AI adoption are, and asking for transparent explanations of why certain tools are being adopted, and at what cost. One thing I want to contend with a little is that while these systems may be freely available to a lot of users, they are not free. Technology companies aren't making them available out of the goodness of their hearts. For everybody, there are costs, whether it's using student data to train their systems and build out their products, collecting data on students in that way, or, down the line, charging schools user fees en masse for these systems: make it free now and eventually turn it into a source of income. These are things we have to be aware of; they're in the companies' plans. And that would be money flowing from the education system out to technology companies. So what parents can do is be involved in their local schools, ask the school what its own rules are, and push for a clear plan. That way you can have a better understanding of what the future holds for your kids.

[Dr. Mark Warschauer]: Thank you, thank you. Let me turn to another question, which for me is probably the most fascinating one related to AI in education. I've been involved in educational research with technology going back to the 1990s, and I remember that with the internet, people originally saw computers as a way to tutor students in exactly the same things they were learning before. Then people realized that computers brought new forms of information literacy and multimodal literacy, new ways of learning and thinking that we had to prepare students for. And AI is even more transformative. AI is certainly going to transform the way people do computer programming; there's no doubt about that, and it's already happening. If you look at professional life, AI is transforming the way people write and communicate. Over time, our notions of what writing is have changed; they certainly changed with the invention of the printing press, and they've changed again with the development of the internet. We probably pay less attention to things like spelling and punctuation because we have checkers for that. So at what point are we teaching students to program computers without AI versus with AI? Are we teaching students to write without AI or with AI? And before you answer, I know there's a facile answer, which is that those who can do it better without can also do it better with. But that's not 100% persuasive to me. I feel like there may be ways to teach people to be really good writers and computer programmers with AI, maybe without them learning some of the underlying skills. So what really is the goal of education in the AI era?

[Dr. Mathilde Cerioli]: I would love to bring a few things to that. I'm not sure someone who has no basic knowledge about something can do it well with AI, because as long as it works, it works, but when it doesn't work, you have no way of fixing it or understanding where the issue is. We talk a great deal about critical thinking as the new skill. Yes, but then we need to ask: what is critical thinking? What do you need in order to think critically? What we know is that you need to be competent, and you need enough knowledge to be critical of what's presented to you. So there are things that are not going to change. We still need to be able to express ourselves orally, to talk to others, and a lot of that is developed through learning to write, to make a good argument, to make a good case. We need to be able to function; we're not just writing all the time. And we need that common, basic knowledge we can have conversations around, because we all share some set of agreements about how the world functions, and that basic knowledge is also the foundation of all social skills: how we interact with others, the common structure we use to relate to one another based on the basic agreements we've made about knowledge. So thinking that our children don't need to know all these things is not quite true. Sometimes I see claims like, oh, they don't need to learn the names of countries anymore because they can find them on the internet. Yes, but you still need to know there are a lot of countries in order to go looking for them. There's really that notion that you don't know what you don't know: when you're unaware that there are things to know about a topic, you can't have a good understanding of what you're missing, and you can't be critical. So I think we still need to ask, in education, what are the basic skills we need as humans to function together as a society, and those shouldn't change with AI. We know from memory research that if we tell people they can store information in a folder on their computer, they will remember not the content but where it's stored. We put strategies in place as humans to make things faster and easier for ourselves, and at the same time the basics of being human aren't changing. So I think we need to push for AI literacy, because children need to understand AI, but they also really need to understand what it is to be human, what their core skills are, and how they relate to the world, so that they can decide what they need to hold on to and why. Why do they need to learn to write when they could do it with ChatGPT? Because you need to be able to speak and express yourself and have all those layers in your emotions, so that when you're relating to others you have all those nuances in your perception of the world. That was my very long answer, but basically I think we need to take a step back on what we want to hold on to.

[Dr. Mark Warschauer]: Yeah. Adam?

[Dr. Adam Dubé]: Yeah. What is the purpose of education? Not exactly a small question. But within that, what it should not be is simply preparing our students for the workforce of today. That shouldn't be the goal, because then it would answer questions like "do they need to learn how to code?" with "well, we should be teaching them prompt engineering." But maybe they don't need prompt engineering, and then you get this ever-changing answer about which skills they need right now. What's the purpose of education? To enable our children to meaningfully engage with, understand, critique, appreciate, and contribute to our society. Then we can ask what skills they need to learn. When we ask about writing, what's the purpose of writing? Is it just to impart information, or is it to communicate with another human being? And then we can ask what role things like ChatGPT should play in producing those two different types of outcomes. If I'm just writing a quick email or trying to get three points across to somebody, maybe there's a role for these automated systems in that context. But if I'm trying to communicate a message to another person, a family member, somebody at my workplace, then that's somewhere I have to be taught how to engage and convey my ideas to another person, to effectively help them understand what I want to say. And that helps us make decisions about which skills we want students to learn and practice, and maybe when we should use these tools.

[Dr. Mark Warschauer]: Fantastic. Let me turn to another really important question. A lot of us are familiar with Benjamin Bloom's two sigma problem from many decades ago: the idea, and this was exaggerated and never really proven, that individual tutoring is so powerful it can be two standard deviations better than other types of instruction. Again, there was some exaggeration there. But we do know from research that, for example, one of the best ways to learn to write is to have individual coaching sessions with a skilled teacher who can sit down with you, go over your writing, and discuss it with you. The problem is that teachers are not able to give that kind of 24/7 individualized support to every student. People have looked to technology for that for a long time, and technology has never been able to do it very well, especially in ill-defined domains like writing. And all of a sudden we have a tool, generative AI, that can do that kind of tutoring quite well. Is it perfect yet? No, but it's getting better and better. So my question is: what do we gain and what do we lose when we start using really skilled, 24/7 generative AI tutors with children? One thing I can think of is that a skilled human tutor knows when to withdraw their tutoring so that children have to do more on their own; with generative AI, its strength and its weakness is that it's available 24/7. A very big element is also the social element of learning: children learn a lot from social environments. So what do you all see as the benefits and threats of very good personalized tutoring with generative AI, and how can we maximize the benefits while minimizing the threats?

[Amanda Bickerstaff]: Just quickly, I think we might be overstating how good they are right now. Look at Khanmigo and Khan Academy, which went all in; their math tutor was released a day after GPT-4, and essentially they've had to walk back what the tool can do. I think generative AI is actually not a good tutor, and probably never will be, because of hallucinations and because of sycophancy. By sycophancy I mean that if Adam said "I don't know" a couple of times, ChatGPT would just tell him the answer, which is more akin to a homework helper, and that's been shown to be less effective than high-quality or high-dosage tutoring. So I think we are not nearly as close as we think we are, and it's been overblown pretty significantly at this stage. There have been promising signals, though. Stanford did a study where online tutors used an AI assistant that reminded them to slow down, ask better questions, and give explanations, and it was the lowest-rated tutors who ended up supporting students the most, which was a very nice result. So, to blow this out a bit, I think there's a step in between: a co-teacher, a co-tutor, or a co-learner for students. We know from Visible Learning effect sizes that teachable agents, kids teaching instead of being taught, have a very strong impact on learning, especially early learning. That's probably the step in between. I would love for us to get away from this ivory-tower idea that every kid will have an on-demand tutor available all the time, and toward what we actually want to do as we learn more about learning science. Because here's something people don't talk about very often: our understanding of how the brain works, how learning works, and even of standard progressions is actually not that great yet. The opportunity with artificial intelligence, as it advances, is that we'll hopefully know more about learning over the next decade than we ever have before. But there's incremental progress to be made if we look at the actual needs of students and teachers and at what the technology can do in the medium term; that's where we can start to see some meaningful change, instead of putting all our eggs in the basket of generative AI figuring out inaccuracies and all these things. That's where I would like to see us going, because I do think it's possible, but it's definitely not here right now. Even if general intelligence arrives, it seems like it won't be as good as we need it to be, because pedagogy itself potentially isn't well enough defined for a model to know how to do it.

[Dr. Adam Dubé]: Yeah, and I'd like to reiterate some of those ideas. This assumption that these systems are going to become effective tutors, we have to question it, and I don't think we should put it out there as a natural end state for these systems, because that leads to arguments like: well, we should all adopt it now, today is the worst it's ever going to be, so it will only get better. And that recommends certain types of uses and practices that we may want to avoid. Instead, we should really look at how these things exist now, and at how well these companies actually understand the practice of teaching. For example, last May Google put out a paper in which they said they had analyzed all the learning theories in existence and created one mathematical model for optimal teaching. The idea that you could actually do that is ludicrous. So I think we need to accept that we still have to build a better understanding of what makes good tutoring, and that it's hard to build that into these kinds of systems. It makes me think back: you quoted Socrates and Plato, Mark, but if you go to the not-so-distant history of Skinner, he talked about the science of learning but the art of teaching. Even a behaviorist like Skinner thought there was a nuance and a practice to teaching and working with students, a human activity that's really important and hard to incorporate into these systems. So I just wanted to reiterate a bit of what Amanda said.

[Dr. Mark Warschauer]: Thank you all. We could talk about these questions for hours, but I think we need to wrap up, so let me just make one final comment. Throughout history, we've swung between extremes of thinking that a technology is going to destroy education or that it's going to be the silver bullet that transforms and saves it. What I hear from this panel is a healthy, critical, reflective, balanced approach, and a sense that we'll probably be steered in the wrong direction if we go to either extreme. We can learn from what we know about television, about books, about social media, about anything, which is that it depends on how it's used. So there's probably not going to be any shortcut to continuing to think critically about these things and reflect on them, as we've done in this panel today. This panel has been very helpful for my own journey on that, and I thank you all for sharing your thoughts. I'd like to turn it back to Kris for her final thoughts.

[Kris Perry]: Thank you, Mark, and the entire panel for this thoughtful discussion, providing the knowledge and strategies parents and educators need to navigate AI in the classroom with confidence. If you found today’s webinar insightful, help us keep the conversation going. Your donation supports Ask the Experts webinars. Just scan the QR code on the screen, or visit childrenandscreens.org to give. Thank you.