Rapid advances in artificial intelligence systems continue to be deployed rapidly for use in commonly available tools online. What are the effects of today’s AI technologies on children’s development? Will accessible AI tools erode children’s critical thinking skills? Will chatbots disrupt children’s ability to socialize properly? On this episode of Screen Deep, host Kris Perry explores these timely questions with Dr. Ying Xu, Assistant Professor of AI in Learning and Education at Harvard Graduate School of Education. Dr. Xu draws on her research and emerging insights from the field in a nuanced discussion of how children currently think about AI technologies, and the potential risks and benefits of AI for children’s cognitive and social development. She provides suggestions for the ethical development and implementation of AI, with an emphasis on including children in the design process.

About Ying Xu

Ying Xu is an Assistant Professor of AI in Learning and Education at Harvard Graduate School of Education. Her research focuses on designing and evaluating AI technologies that promote language and literacy development, STEM learning, and wellbeing for children and families.

 

In this episode, you’ll learn:

  1. How children are interacting with generative AI and other new AI tools.
  2. What the latest research says about AI’s impacts on children’s social development.
  3. Where AI can support children’s learning – and where it risks “outsourcing” independent thinking and critical problem-solving skills.
  4. How to tell whether an AI product is appropriate for a child at a specific age.
  5. What AI developers could do to make AI tools safer and developmentally-appropriate for young users.
  6. Why “co-learning” with your children is essential as AI tools continue to evolve.

Studies mentioned in this episode, in order mentioned: 

Xu, Y., Thomas, T., Yu, C., Pan, E. Z. (2025). What makes children perceive or not perceive minds in generative AI? Computers and Human Behavior: Artificial Humans, 4. https://doi.org/10.1016/j.chbah.2025.100135

 Reeves, B., & Nass, C. I. (1996). The media equation : how people treat computers, television, and new media like real people and places. Cambridge University Press.

Chein, J. M., Martinez, S. A., & Barone, A. R. (2024). Human intelligence can safeguard against artificial intelligence: Individual differences in the discernment of human from AI texts. Scientific Reports, 14. https://doi.org/10.1038/s41598-024-76218-y​

Xu, Y., Aubele, J., Vigil, V., Bustamante, A. S., Kim, Y. S., & Warschauer, M. (2022). Dialogue with a conversational agent promotes children’s story comprehension via enhancing engagement. Child Development, 93(2), e149–e167. https://doi.org/10.1111/cdev.13708

Xu, Y., He, K., Levine, J., Ritchie, D., Pan, Z., Bustamante, A., & Warschauer, M. (2024). Artificial intelligence enhances children’s science learning from television shows.Journal of Educational Psychology, 116(7), 1071–1092. https://doi.org/10.1037/edu0000889

Xu, Y., Prado, Y., Severson, R.L., Lovato, S., Cassell, J. (2025). Growing Up with Artificial Intelligence: Implications for Child Development. In: Christakis, D.A., Hale, L. (eds) Handbook of Children and Screens. Springer, Cham. https://doi.org/10.1007/978-3-031-69362-5_83

[Kris Perry]: Hello, I’m Kris Perry, Executive Director of Children and Screens and your host of the Screen Deep podcast, where we go on deep dives with experts in the field to decode young brains and behavior in a digital world. 

AI technology and power has advanced rapidly, transforming our world at a breakneck pace. This has raised big new ethical questions, especially when it comes to its impact on children’s lives. How are children interacting with and learning from AI? And how can we ensure these powerful technologies support, rather than harm, their development? 

To help us navigate these pressing issues, today we are joined by Dr. Ying Xu, Assistant Professor of AI and Learning and Education at the Harvard Graduate School of Education. She’s here to break down how AI is shaping children’s learning experiences and what ethical considerations parents and educators need to keep in mind. I’m eager to dive into this conversation. Welcome, Ying.

[Dr. Ying Xu]: Thank you, Kris. Thank you for having me.

[Kris Perry]:  There have been so many rapid developments in AI in the past few years to the point that I expect there has been at least one breaking news story between when we’re recording this podcast and when we release it. To ground us in the latest AI terminology and to help our listeners understand precisely what we’re talking about, let’s drill into what we mean when we refer to AI. Can you take us quickly through the differences between Generative AI models, tools, and software such as ChatGPT that support those models and the machine learning algorithms underlying social platforms?

[Dr. Ying Xu]: Yeah, thank you. This is a very important question to start with. Yeah, so AI is actually much broader than well-known tools in the public sphere like ChatGPT, right? So it includes a wide range of technologies such as computer vision that can help computer recognize images, classifications that can help computer classify dogs versus cats, and robotics, natural language processing and things are all those AI technologies. But many AI systems operate actually in the background. So if you think about YouTube’s recommendation algorithm or facial recognition in smart home devices, meaning that children may interact with this AI without even realizing it. 

But my research, along with much in the public discussion, focuses on the kind of AI agents that engages children directly, particularly conversational AI capable of engaging in natural humanlike interactions and dialogues. So you could think about those older conversational AI technologies like Siri and Alexa, and also including the newer Gen AI technologies like ChatGPT and other probably social robot companions that the students could directly play with and learn from.

[Kris Perry]: I know you’ve focused the majority of your work on the use of AI in learning in young children. How are young children interacting with the types of AI you just discussed, and what have you learned about how AI is affecting children’s learning and cognitive development?

[Dr. Ying Xu]: So at a very high level, we see kids’ interactions with AI agents kind of mirror the way they interact with people. They ask questions, look for specific facts, and seek explanations, and sometimes ask for help with their homework. So this behavior is just coming from preschoolers right through adolescents. But besides this task-oriented interactions, we have also noticed kids sometimes use AI as a companion or emotional disclosure, especially when they get older. 

So I think that the reason why I care about conversational AI is exactly the question you asked. It matters for children’s social and cognitive development, because we understand that conversation plays such a huge role in shaping how children grow, acquire knowledge, and also build relationships. So the key question is whether AI conversational partners could offer the additional learning experiences, or if they might actually undermine  the growth opportunities that stem from interpersonal interactions. So we actually don’t have definitive answers yet, but we have definitely seen evidence supporting both possibilities.

[Kris Perry]: Exactly how do children learn from and interact with AI conversational systems, as you just described, compared to human adults, and how is this changing as those platforms evolve?

[Dr. Ying Xu]: So the quick answer is that when AI is designed to teach specific knowledge, it does a pretty good job teaching kids. So kids could learn just well from AI as they do from humans in many cases–if you think about dialogic reading, where an adult reading a storybook with a child and ask comprehension questions and provide feedback. So AI could be designed to do a similar task. In my own study, we have developed this kind of AI reading companion who narrates the storybook and asks children questions and provide feedback. We have found that this kind of AI companion improved children’s comprehension to a similar degree as an adult companion. 

But we have also found that even though the learning outcomes could be comparable, across multiple studies, when kids interact with AI, they tend to actually put less effort in answering the questions and engaging in the conversation, especially in more challenging areas that require some kind of back-and-forth discussions. And we think that there are two main reasons that are driving these disparities. 

The first thing is AI itself, at least as of now, AI does not fully capture the depth of human conversations. The way it responds, its tone, its voice, the responsiveness, and especially the lack of nonverbal expressions–it just isn’t the same as talking to a person, which makes children respond differently. So that said, this technology is improving fast. So we expect that this gap would actually likely to shrink over time. 

So the second reason why kids responded differently is how kids see AI, their perception. So even if AI could perfectly mimic human conversations, just knowing that they’re interacting with an AI but not an actual person would actually create differences in a child’s perception. So, for example, a child might try harder with parents or teachers because they want to impress them. On the flip side, they may also feel safer sharing unfinished ideas with AI because it doesn’t judge them the way a person might. So the second factor comes from our natural instinct to interact with other human beings. And I don’t think it is something that will be changed over time, but over generations, as AI become a bigger part of our everyday life, the way we approach this kind of social interactions might evolve.

[Kris Perry]:  Well, when I was reading your work and thinking about what I wanted to ask you, this very important finding around effort, even the more nuanced point around judgment, doesn’t even get as deep as what we know about how children learn and thrive in connection with other humans. And some of it is actually tactile. It goes even beyond the words being used and whether the child knows the person’s real or not real, but the Bowlby wire cage experiments are what come to mind for me that there’s a missing human element that goes beyond the verbal, and I wonder if you’ve thought a little bit about that other layer of interaction between the child and either the platform or the adult.

[Dr. Ying Xu]: I think that a lot of time we kind of are concerned that children anthropomorphize AI. When we introduce AI agents, the number one concern is, “What if children become more attached to the AI agent and they don’t want to interact with human beings anymore?” So I will have to say that, from my own study, we haven’t seen much evidence supporting that. Children, even as young as four, they recognize that they’re interacting with a machine which is different from their interactions with people. And we do see that a lot of this kind of social-oriented element in their interactions. So, for example, when kids ask AI, “How old are you? Do you go to school? And what is your favorite breakfast?” As if they were treating AI as a person, a lot of the time we use this as an indicator that children believing that they are talking with a person. But it is actually might not be the case because, if you ask children why they ask those questions, a lot of time those questions are driven by children’s playfulness. They wanted to test the limits of AI because they do not believe that those are the questions that AI or a machine would be able to provide a sensible answers. So this kind-of curiosity driven exploration, I think actually to some extent suggests that children could differentiate AI between human beings. So I wouldn’t worry too much about this kind of confusion. 

But on the other hand, we also see that this kind of social behaviors children had when they engage with AI, which could be actually a natural response as we engage in something that is highly humanlike. So there has been research many decades ago looking at adults interacting with computers and very smart adults and they fully recognize that they’re interacting with a machine but yet they still have a lot of these social-oriented behaviors. So that is actually a reminder for us that, because AI is so human-like, even children have the awareness that it is different from human, but they still might actually engage with AI in ways that resemble their interactions with human in a lot of different ways. So the question for us, for researchers, is really to tease out what kind of these social interactions are desirable that could promote learning and growth, and what kind of this kind-of social interactions are kind of undermining their interpersonal interactions?

[Kris Perry]: Well, given how sophisticated children are, even at young ages, around whether it’s in a human or a machine, is there an ideal age or stage of development where AI is more appropriate to introduce from a cognitive perspective and maybe even an emotional perspective, and what are the age differences in terms of how children respond to AI?

[Dr. Ying Xu]: Yeah, so this is a tough question to recommend an age limit. I know that many popular AI products like ChatGPT have age restrictions, usually limited for adults. However, those rules are mostly driven by legal and liability concerns rather than an assessment of whether the AI itself is appropriate for kids. I don’t think there is a one-size-fits-all age limit. 

Instead, when we think about the age limit, we need to think about three key factors. The first one is the AI product itself, whether it’s designed to do and what kind of interactions it supports. And second is the child, things like their language skills, cognitive abilities, prior knowledge and experiences with AI. And third, the social context surrounding children’s AI usage–for example, whether there is an engaging adult helping to guide their usage. So, for example, a toddler might safely interact with an AI storytelling bot if their parents are there to guide the experience. In the meantime, a high school student with little experience using AI might still run into risks if they are interacting with a social AI chat bot on their own. 

So there are several practical steps that we could take to decide whether an AI product is appropriate for a child at a specific age. I think it is a shared responsibility–both the developers and consumers need to do their homework. For developers, conducting expert interviews and thorough play testing with kids of different ages can help them understand engagement patterns and learning outcomes. This would allow them to recommend a  broad but informed age range for the product. And for consumers, especially parents, caregivers, and educators, it is important for them to be observant when introducing any AI product, taking smaller steps and monitor how the child interacts with it and be ready to make quick adjustments if needed.

[Kris Perry]: How can AI be used to actually bolster, rather than inhibit, children’s cognitive capacities?

[Dr. Ying Xu]: I don’t think we have really concrete evidence suggesting if using AI would hinder or support children’s cognitive capacities yet. Most of our research around AI and children’s learning has been focused on this kind of short-term learning outcomes. So, for example, when an AI companion teaches a child a specific lesson and kids are able to learn that lesson pretty well. But we don’t know how this kind of learning of specific lessons, concepts, and skills would actually be translated into broader cognitive abilities. 

But there are, I think, several speculations. So, for example, we might expect that, when children ask AI questions, it is a way to improve their ability to ask good questions that are answerable by AI and presumably by humans as well. Question-asking is an important skill that we need to develop to acquire knowledge. And if AI could actually lead children to become a better question-asker–so this could actually have a long-term impact in their cognitive development. 

Second is related to cognitive critical thinking skills. Because AI is sometimes providing misinformation and disinformation, that requires children to constantly leverage their critical thinking, to evaluate and interrogate the quality of the questions. This, to some extent, would be a training opportunities for children to develop this kind of awareness and strategies when they encounter information either from AI or from human informants, in the future. So, we know that humans sometimes provide inaccurate information as well. So I think this skill would actually benefit them not just in their interactions with AI, but also in their acquiring and evaluating information in the long run.

[Kris Perry]: You mentioned some of the effects on social skills with children and AI. Are interactions with AI impacting how they relate socially with other human beings?

[Dr. Ying Xu]: I’ll have to say that we, again, don’t have definitive answers yet. I think the first question to your point is really whether talking with AI changed the way children talk to other people. It is reasonable to expect that, if children talk to AI frequently–and because AI isn’t necessarily confined by our general social etiquette–people might worry that kids will adopt this communication behaviors in their interactions with other people, which could be seen as rude or inappropriate, right? So we don’t say, “Hey, someone,” all the time. We say, “Excuse me,” instead of, “Hey.” And we know that children learn vocabulary from AI and can pick up linguistic routines from videos and games. So I think this concern is definitely valid, but we don’t know yet if AI shape their interactions in a similar way. 

So the second question is whether relying on AI as a social partner could replace children’s time spent interacting with real people. Research on social media and gaming suggests that technology can sometimes displace social interaction. So there is definitely the possibility of children’s interacting with AI that might have the same side effect. But on the positive side, AI could also potentially strengthen social interactions. There have been efforts developing AI tools to help children, including those on the autism-spectrum, to improve communication skills with AI companions. We have also seen AI specifically designed as a third-party facilitator to support children’s collaborative learning. So I think those are the possibilities and promises AI could actually be used to enhance, rather than undermining, social interactions.

[Kris Perry]: There are lots of child-facing AI products like Pinwheel GPT, Character AI, and Khanmigo that are meant to mimic humans. How can we help train children in basic AI literacy when even adults increasingly cannot tell the difference with newer AI systems and humans?

[Dr. Ying Xu]: So this is a very interesting question. I actually have done some investigation on that. So actually, not just kids, sometimes even adults struggle to tell whether something was generated by AI or by a human. So there have been several studies on this. They found that adults often perform no better than chance in differentiating between AI versus human-generated content. And there our own research on this kind of speech-based interactions between a child and AI, we found that children also had a hard time differentiating between AI and human responses if we did not tell them ahead of time. 

I think beyond just the ability to tell the difference, both adults and children tend to rely on somewhat inaccurate heuristics, which is some form of mental shortcuts that could lead to wrong assumptions about AI. So, for example, in our studies, we found that children often judged whether something is coming from AI based on if AI uses some sort of social-like language, like small-talk. Asking, “How are you?” or expressing appreciation, like, “Thank you for asking me this question.” But those are the things AI can be trained to do, which make them unreliable markers for children. 

To answer your question, I think it’s important to help children develop an awareness that AI can be very human-like. But at the very same time, AI developers also have the responsibility to promote transparency and in their design so that children understand the nature of the partner that they’re interacting with. We see a lot of AI products kind of label or frame the product as your friend and your companion that will be always listening and be with you. I think this is kind of tricky because it confuses children from the product perspective that they actually don’t get a clear signal who they’re interacting with. Instead, I would actually suggest product developers to clearly label, like, “This is your AI companion, this is an AI application, that could answer your question.” And we could use those introductory-phase as a way, an opportunity to introduce students some inner working of AI to equip them some knowledge.

[Kris Perry]: I asked the question about kids and I also said because it’s hard for adults, because I find it difficult, already, as rudimentary as these products are at this stage, to differentiate sometimes between what’s AI, what’s a bot, what’s a person. I think we’ve all had this experience of trying to interact, say, with our bank or our healthcare provider and is it a person or is it a bot? And I really appreciate you saying that you and other researchers are still trying to unpack this and understand it better and interpret behavior in a way that allows us to have a better sense of what to do next and we really are, it feels, like I said at the beginning, that this is happening so fast and we’re all doing our very best to process what’s real and not real, how to use artificial intelligence as a benefit, rather than a liability, is–it feels like an awful lot of responsibility. And AI tools seem to be getting more developed. There are more of them. The ones that have been out longer are getting, they’re on their 10th version already. So how are the newer AI tools being used specifically in schools and classrooms, not only at home with parents and caregivers, but now teachers and childcare workers are using these tools?

[Dr. Ying Xu]: I think a lot of people–it’s actually getting more complicated when we think about a school environment because for a lot of AI products, they are designed to engage in this end-to-end, one-on-one interactions with kids. And if we think about an environment–so it’s a group setting where multiple kids might be engaging with the same AI agents. And what kind of interactions that this AI agent could promote would be a huge question. And also when we think about school environment, so there are different stakeholders that are involved, like their peers, teachers, and substitute teachers, and how are those education stakeholders play a role in this kind of interactions and this process. 

I think a lot of people also have questions whether it is ethical and safe to introduce AI in a classroom environment and school environment. So I actually believe that schools and classrooms are relatively safer environments for students to start exploring AI. It is because, presumably, AI applications that make it to schools are vetted through procedures implemented by school districts, which, to some extent, ensure that they meet certain standards. Additionally, students’ usage of AI at school is relatively more guided than their free explorations at home, and teachers might be looking at it, and teachers aid might be looking at it, so this extra guidance from teachers and other adults would likely enhance the safety. 

On the other hand, another question is in what ways using AI in classrooms would impact student learning. So it also raises the issue of how much classroom instruction time AI usage might take up, which potentially would be a drawback. So far, study results are quite mixed. So some studies, particularly in higher education settings, have found that when students use AI learning companion, they tend to learn better than when just listening to the lectures alone. So we might expect the similar benefits for younger students using AI as a complementary tool, whether to support this higher student ratio or to provide the additional learning opportunities during students’ downtime, such as waiting for teachers to come around to help them when they get stuck. So this could also make learning more effective in classrooms. 

On the other hand–so, we also have studies found that when an AI chatbot is readily available in a classroom, some students tend to outsource their thinking rather than engaging in this kind of productive struggle that is known to be beneficial for learning. So this key consideration when kind of thinking whether we should introduce AI into schools is how to provide the necessary scaffolding while still maintaining and protecting students’ time for independent thinking. I think that teachers have actually a lot of room for decision-making in terms of how to implement AI tools.

[Kris Perry]: Yeah, I mean, the combination of the earlier point around the caregiver parent versus the machine trying to help the child with a book or with reading and how they’re pretty close in quality, but then there are these intangibles that feel harder to measure and harder to understand, but seem like a big part of how humans learn. 

And then you just made this other point that’s not a trivial point, that outsourcing thinking, having to struggle, having to sort of try and fail and try and fail is actually a skill in and of itself. That grit and resilience that’s also back to kind of a social emotional skill is an area of worry for me because I think, sure, knowledge can be imparted and absorbed, but can some of the muscle around learning and being analytical be transferred from a machine to a human? So, how can we scaffold children in this new AI environment so that they do develop those critical thinking skills?

[Dr. Ying Xu]: I think the critical thinking skill isn’t confined in the domain of AI. So children develop critical thinking skill when they engage in a variety of tasks. So there has been the research on, for example, children’s selective trust in developmental psychology, where children need to learn who are the trustworthy information sources from even humans. So there have been also studies done in social media and internet search where children need to develop this kind of critical thinking skill when they encounter this mixed array of information with varying quality. But I do think that AI actually provides a very interesting context for us to think about how we could support children’s development of critical thinking skills. And in my own research and as long as research from other researchers, and we think about using AI workshops or AI literacy programs as a way to increase students’ awareness of the fact that information could be fallible. So, it might not always be accurate. 

But we also feel that just having this awareness might not be sufficient because a lot of those AI workshops and programs happen outside of children’s interactions with AI platforms, and children might not be able to effectively translate their awareness into their accurate evaluation of AI-provided information. So, for example, if you encounter a piece of information provided by ChatGPT, even that you know that this information could be wrong, if you don’t have the sufficient background knowledge to help you actually identify exactly how it is wrong, it might not actually sufficiently support you to leverage this piece of information effectively. 

And I think as educators and researchers, another area that we need to explore is how we could actually support children to develop the necessary skills to actually interrogate and evaluate information when they use AI. For example, to ask follow-up questions and ask AI to triangulate and provide responses to a parallel question so that they could use the strategy to compare if AI’s responses to the parallel question is consistent with the responses to the previous question. Or we could encourage students to compare AI’s responses to what they have known or what the teachers have told them, and just to make sure that the information is triangulated by another source, so that to increase the trustworthiness. And to sum up, I think these kind of cognitive strategies are important for children to equip, so that they could leverage those cognitive strategies when they encounter information in real time.

[Kris Perry]: I actually appreciate the triangulation example. That’s a really tangible example of how you can scaffold the skills in a child to be able to work with an AI tool in a way that improves its accuracy, that was really helpful. 

And you’ve also been helpful to the Institute. You recently contributed a chapter in our recently published Handbook of Children and Screens, specifically about the ethics of AI use in children. Does deploying interactive artificial intelligence systems to children increase the risk they will be exposed to harmful content?

[Dr. Ying Xu]: So if you ask for an absolute possibility, then yes. So if we provide access, then of course it will increase the risk, but I also don’t think AI system would necessarily introduce harmful content, because there are actually many different ways that we could control AI system to make it more predictable. So, for example, instead of using Generative AI that responds to children on the fly, a more controlled approach would be to design structured interaction flows like branching trees, where the AI is simply determining which branch to follow rather than generating new content. So we use this approach in our collaboration with PBS Kids to build interactive videos where children could respond to characters’ questions and received pre-designed responses from the character. So since all the content was created by the research team and by the producers rather than generated by AI, there was actually no risk of harmful materials slipping through. So I think the key is to consider the spectrum of just how much control we want to put on AI and decide what aspect of AI that needs to be constrained to ensure safe and effective interactions.

[Kris Perry]: So it sounds like the key to safety, at least for young children, is limiting the data sets, making sure they’re using the right data sets for their AI interaction.

[Dr. Ying Xu]: Yes, so data set is one aspect, but what I was trying to say is we could actually limit the types of AI technologies that we implement in child-facing products. And AI not only includes Generative AI, it also includes the kind of AI that–for example, automatic speech recognition that could translate children’s speech into text and recognize what the child is trying to say, and then classification if we have a dialogue tree that we have built ahead of time. And then we could use this recognition and classification to determine what category a child’s response falls into. And then we provide the kind of feedback developed by researchers and educators correspond to different categories of children’s responses. So in this way, we kind of allow this interactivity that the children would receive tailored feedback based on their responses, but we also limit AI’s possibilities of generating contents on the fly. 

I think–so, it’s a whole spectrum of just how much control we want to put in this process. I think that the challenge for developers and educators and caregivers is really finding the right balance within the spectrum, just how controlled or how open-ended we want the systems to be.

[Kris Perry]: You and others have mentioned child-centered design when it comes to ethical deployment of media for children. Can you explain how child-centered design might apply to the production of large language models?

[Dr. Ying Xu]: So this is a complicated question. I think the starting point is to think about what we mean by child-centered design when we talk about child-centered design. I believe that it includes not only safety, which we often discuss, but also how we can maximize the opportunities for children to engage and to learn. So a child-centered design fundamentally involves finding the right balance between mitigating risks and maximizing benefits. I think the latter is often overshadowed by safety concerns in many of the discussions. So when you ask how we could actually implement this kind of child-centered design in the production process, I think there are different considerations. 

The first one is how we could involve education stakeholders, and even kids, in the design process. I know a lot of standard process for this kind of product development is to involve kids in the play test to figure out if a product is suitable for kids or not and identify common, for example, stumbling points for the interaction so that the developers could improve. But something that not a lot of people have thought about is who are the kids who kind of got invited to engage in this kind of play test? If a product is aimed to actually reach a broad range of children from diverse backgrounds, it is actually very important for us to include more kids from different backgrounds to engage in this kind of play test session so that the developers could understand how their products could actually meet the needs of different children. 

And the second thing is just how much and to what extent and how many these kind of playtest sessions that we need to engage. I think one difference between large language models and this kind of previous rule-based interactive apps is that, because–large language models, we develop those models and products mostly through prompt engineering, so we kind of use prompts to instruct how the models should perform or should behave. And there is no predictive way how the model would produce the content or outcome. So that actually requires actually more extensive play testing so that we could actually capture a wide range of model reactions to different children’s interactions and input so that we could catch the edge cases. On the other hand, if you think about more traditional interactive app design, as long as we make sure that there’s no bug or no glitches in the system, we could always assume that the system perform in a predictable way, but it’s not the case in large language models. 

And another thing that I wanted to mention is just how fast those models evolve. And there are actually a lot of studies looking at the evolution of model behaviors. So even for the same model, it behaves kind of differently over time. And so I think that that is a reminder for developer, is that this kind of testing is not a one-off effort. So it’s really a continuous and iterative process that needs to actually continue to be continued even the product is released so that we are up to date on the model performance.

[Kris Perry]: GenAI-based tools are improving and gaining new functions every day at such an incredible rate of speed. What are the primary risks to children from this evolution and from upcoming technologies like improved AI agents?

[Dr. Ying Xu]: I think the big question is, yeah, so AI technologies are changing every day, but we’re not sure how much those changes are incremental versus making it qualitatively different from the previous version of AI technologies. So I think for children, if you think about from a research perspective, we kind of usually assume that one day, AI technologies would actually reach a level that is very close to human performance. So that’s why we actually engage in a lot of research where we have a human adult or human experimenter pretending to be AI and engage children in this kind of interactions. And what we are trying to get at is that, assuming AI has been advanced in this state that it could perfectly simulate humans, what are the implications for children?

[Kris Perry]: What is the most urgent area of research you think needs to be done vis-à-vis AI and children, and does the speed of AI development make it hard to research effectively?

[Dr. Ying Xu]: I think that for now, we have a lot of very broad and opinionated questions and debates. I do think it’s very important for us to ground this discussion in a more concrete manner, which is helpful if we have more evidence to support our discussions and debates. 

I think the first area that we really need more research is just to understand the phenomenon, how students and children are engaging with AI, what specific tasks they use AI for, and under what circumstances. This also includes exploring questions such as what motivates students to use AI and what goals they seek to achieve. And this research could provide a baseline description of how AI is integrated into students’ learning experiences. 

So once we understand this phenomena, I think the second layer of research needs to focus on the impact of AI on students’ engagement and learning. So the key question is whether AI improves learning or developmental outcomes compared to traditional methods, like talking with a parent, or watching television, or playing a mobile app, or other educational resources like reading an encyclopedia. To answer this, we need a lot of comparison studies where one group of student uses AI while the other group of student learns with another format or using another resources. We can then determine whether AI provides this kind of measurable benefits and in what ways. We should also focus on the potential negative consequences such as overtrust or reliance. 

And then the third layer is that we need to focus on how we can actually maximize the benefits AI can bring, including how different ways AI should be designed or implemented. So, for example, questions arise about whether AI code-switches based on children’s home language or engaging them in multimodal interactions can actually better support children’s learning and engagement. It also involves consideration of instructional activities that complement AI usage. So, for example, at what stage a teacher should introduce an AI companion into the classroom to support students finishing their worksheet or reinforcing some of the learning that they had from the lectures? So this line of research would have direct implications on both design and practice. 

Going back to the second part of your question, like AI is changing so fast, and how could we kind of catch up and keep up the pace? So I actually did have somewhat this kind of crisis moment as an AI education researcher. We had actually a lot of instances where we just finished developing an AI system and then the next version of AI models were released, so we actually had to update our AI system so that the models are up to date. But I think that the key to address this challenge is to actually ask the right question. So what are some of the fundamental questions surrounding students’ interactions with AI and machines that are actually unchanged, that are less dependent upon this kind of specifically technological features? So, for example, if we assume that AI is different fundamentally from humans in its ability to have this genuine empathy to students’ lived experiences, maybe we could hypothesize that if we build an AI, it will actually never achieve the level of empathy and genuine mutual understanding. And then in this way, no matter how fast technology developed, this was the whole truth. 

So that’s why we kind of need to really think deep. What are the fundamental differences between AI and humans and other learning resources? And then we design our study and ask questions that address this kind of fundamental differences.

[Kris Perry]: What’s been the most surprising finding in your research so far about AI in children, and something you really just did not expect and you were just surprised by?

[Dr. Ying Xu]: So we have a lot of surprises in our research. So this one, I’m not sure if we didn’t expect, but I think that it’s just very different from the prevailing narrative about suggesting children to view AI as fallible tools. And our research found that from a child’s perspective, AI may be seen not just as a tool, but also as a social partner. So children turn to AI for therapy, and turn to AI for emotional disclosure, and turn to AI to have this kind of somewhat closer interpersonal conversations, especially for older kids and adolescents. I think this could have significant implication if children see AI as social partners. On the one hand, it might actually encourage children’s engagement and model learning strategies. But on the other hand, it could lead children to over-trust AI, as if it came from a trusted human source. 

It could also make them vulnerable to manipulation by advertisements or persuasion. So this prompted me to think about–again, going back to your earlier question–how we could actually implement strategies to support children’s development of AI literacy. And I think that we should support children in developing both skills and mindset to approach and position AI in ways that benefit them. So this is why I started this project, not only to provide some sort of direct instructions like teaching children the strengths and limitations of AI,  but also embed reflection prompts during their interactions with AI, encouraging them to actually pause and think critically. I hope that by equipping children with knowledge about AI and providing ongoing support, they could navigate their interaction with greater agency and make intentional choices about the role they want AI to play in their learning.

[Kris Perry]: Is there any one thing you think parents and educators should know at this point in time that would help keep kids healthy and safe in our increasingly AI-driven online and offline worlds?

[Dr. Ying Xu]: A lot of time, when we talk about adult roles in mediating children’s technology usage, we think of them as supervisors. And, of course, this kind of adult guidance is still important even in the AI era, but I think it is also important for adults to position themselves as co-learners as well. Because AI is evolving so quickly and even adults, parents, teachers, they might not have actually more expertise and experiences than their kids. So instead of just supervising, adults actually need to embrace more open-minded, transparent conversations and be willing to actually navigate the space alongside their kids. So learning through this trial, errors, and adjustments.

[Kris Perry]: Thank you, Ying, for taking us through these murky ethical questions surrounding the use of AI, children, and learning. Clearly, this topic is not going away anytime soon. And thank you to our listeners for tuning in. For a transcript of this episode, visit childrenandscreens.org, where you can also find a wealth of resources on parenting, child development, and healthy digital media use. Until next time, keep exploring and learning with us.

Want more Screen Deep?

Explore our podcast archive for more in-depth conversations with leading voices in child development and digital media. Start listening now.