The introduction of free, publicly available generative artificial intelligence technologies (aka Gen AI) like ChatGPT has spurred experimentation and use of these powerful tools by curious users of all ages. While many have found generative AI tools useful for certain tasks and as productivity time-savers, there is significant concern about the implications of children using generative AI during critical periods of social learning and skill development, as well as about deployment of these technologies without a strong body of research into their potential risks.
Children and Screens convened a panel of researchers, child development specialists and policymakers for a webinar on the emergence of generative AI technology and the risks and opportunities it poses for children and families.
What Is New About Generative AI?
Naomi Baron, PhD, Professor Emerita of Linguistics, American University, explains that the major leap in the last few years has been the emergence of a programming scheme called generative pre-trained transformers (GPT), coupled with training on huge collections of text, producing what are known as large language models. Those massive collections of texts are used to predict what the next word in a piece of writing should likely be. While this approach was developed for producing (“generating”) new written language, the same principle underlies generating images or computer code. Generative AI existed before the release of ChatGPT in November 2022, but relatively few people were aware of it. The difference with ChatGPT was that it went viral, and within weeks, students began experimenting. Since then, educators and parents have been grappling with how to respond. Does use of generative AI lead children to cheat on school essays? To become more creative? As the number of generative AI tools multiplies, and as the tools become increasingly sophisticated, the challenges only increase.
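For readers curious about what “predicting the next word” means in practice, the deliberately simplified sketch below illustrates the idea with a toy bigram model. The tiny corpus and the counting approach are illustrative assumptions only; real large language models learn these statistics with neural networks trained on vastly larger text collections.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction (NOT how GPT is implemented):
# count which word follows each word in a tiny made-up corpus, then
# predict the most frequently observed follower.
corpus = "the cat sat on the mat and the cat slept near the cat".split()

# Tally, for each word, the words seen immediately after it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

A real GPT replaces the raw counts with learned probabilities over every word in its vocabulary, conditioned on all the preceding text rather than just one word, but the underlying task is the same: guess what comes next.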
How Are Children Using AI?
In what ways are youth using AI technologies? “Researchers have been observing children in their homes and schools and have found that children primarily interact with AI in two different ways. One is to ask specific fact-based questions to gather information. The other one is less common, but children sometimes engage in personal or social-oriented conversations with AI,” says Ying Xu, PhD, Assistant Professor of Learning Sciences and Technology, University of Michigan. Uses – and sometimes misuses – of AI vary across age, as well as whether individuals are employing AI for personal exploration or for school assignments, says Baron. “When most people talk about generative AI, they are thinking of chatbots that can produce new text or images. However, the same underlying programming scheme now also drives familiar writing functions like spellcheck, grammar and style programs, and predictive texting,” says Baron. A survey of high school students in the US suggests much higher usage of generative AI for personal purposes than in school.
What Are The Risks To Children From AI?
Overtrust In AI-Provided Information
Children use comparable strategies when judging the reliability of AI as they do with humans, says Xu, basing their judgment on whether the informant has provided accurate information in the past as well as on the level of expertise they perceive their source to have. “However, it appears that some children may be better at utilizing the strategies to calibrate the trust than others,” she says. Youth with better background knowledge of the subject area of the conversation or a more sophisticated understanding of the AI mechanisms will be better at making these judgments. Conversely, children with lower AI literacy may be prone to trust information received from AI without critically evaluating its quality.
Bias Reinforcement
Much of the information used for training AI systems has come from historically wealthier and more developed nations, leaving out content, representation, and worldviews from a significant number of other countries and cultures. Pizzo Frey notes that “no product is values or morals neutral,” including AI, yet the nature of AI obscures the underlying values and morals contained in the data used to train the technology. Research has shown that unintentional bias in the large language models children use to write essays shapes the positions they take in the essay, says Vosloo. In time, this effect could influence broader changes in how children see the world around them.
Reshaped Social Skills
Some AI products are now incorporating measures to encourage children to use polite language, says Xu. While such measures may be a step in the right direction, “it also poses a risk of obscuring, at least from the children’s perspectives, the boundaries between AI and humans,” she says.
Underdevelopment Of Foundational Learning Skills
“For AI to be a valuable tool, it shouldn’t just provide easy answers, but rather it should guide children in their journey of sense-making, inquiry, and discovery,” says Xu. There is evidence that when AI is specifically designed to guide children through the learning process, it can be quite effective, she says. However, the most widely available Gen AI systems are not designed for child use, and Xu notes that as of late 2023, OpenAI (the maker of ChatGPT) requires users to be at least 13 years old, with parental consent needed for those between 13 and 18. Baron adds that the spectrum of writing functions today’s AI can serve should not be underestimated. Besides producing new text, writing summaries, or constructing point-by-point arguments, AI is also increasingly taking over basic editing: everything from capitalization, punctuation, and spelling to grammar and style. “Parents, educators, and students need to think carefully through which of these skills are important to be able to handle on your own and when it’s OK to cede control to a computer program,” she says.
Persuasive Misinformation And Disinformation
Pizzo Frey urges that “consumers and children especially must understand, with Generative AI in particular, that these applications are best used for creative exploration, and are not designed to give factual answers to questions or truthful representations of reality, even if they do that a fair bit of the time.” She advocates for the development of guidelines to create “consistency and reassurance” so that children will know when they are interacting with AI and when they are not.
How Should I Help My Children Use AI Technologies Safely And Responsibly?
Co-Use
With guidance from parents or other caring adults, “it can be fine for even very young children to engage in asking questions through chat bots,” says Xu. “We could consider this as an additional learning experience for children. However, it is very important for parents to be aware of the information provided by the chat bot and if necessary, rephrase or supplement it based on the child’s needs.”
Recognize The Limits Of AI
AI systems are sociotechnical, says Pizzo Frey. That means that “technical excellence alone is not enough” for assessing the quality and impact of AI systems because the technology “cannot be separated from the humans and the human-created processes that inform and develop and shape its use.”
Embrace Curiosity
Children are at the forefront of AI integration into society, both today and in the future, says Vosloo. “So we really need to get this right.” It’s not helpful to take an approach that either overly catastrophizes or glorifies the risks and opportunities of AI. “We obviously need to strike that balance of responsible AI that’s safe and ethical, but also leveraging every opportunity that we have now.”
Understand The Importance Of AI Literacy
AI literacy doesn’t require becoming a computer coder or technologist; rather, it means developing a familiarity with what AI tools can and cannot do, says Bywater. Particularly for educators, Bywater urges collaboration with youth in the work to develop AI literacy. “For educators, it’s operating from uncertainty and the willingness to learn together with students that is a really important goal.”
Build AI Literacy Skills
For information on the risks and considerations of specific products, online tools and product reviews are being developed to help assess individual AI products; see the resources section below.