Large Language Models (LLMs) like ChatGPT are a type of AI. Unlike conventional software, which is explicitly programmed, LLMs gain their considerable abilities through multiple stages of training. Learn more: What is Generative AI?
This process is like growing a computer mind in a digital petri dish of human data. Risks to children can emerge at different points in this development process. For example, an AI trained on adult content will respond in adult ways, potentially leading to interactions that are unsuitable for children. AI may also be inaccurate, or be manipulated to create misinformation, and without proper human feedback it may respond inappropriately in everyday conversations with children.
It is generally considered safe for children and adolescents to use large language models, with appropriate supervision. There are child-oriented platforms that put extra filters in place between the user and the LLM to better tailor interactions to the needs of children. However, the development of this type of interactive intelligence presents unique challenges for parents and educators. The age and developmental stage of the child and characteristics of AI platforms should be considered when making decisions about how children use AI.
Aspects of current and emerging AI systems to consider when weighing potential risks:
- Identifying Sources: It is becoming harder to distinguish between content created by AI and content made by humans. “AI detectors” are unreliable at best and will become even less effective as AI grows more capable.
- Over-Trusting: AI can be a good source of information, but it is not always accurate. Systems like ChatGPT are trained to converse with users convincingly rather than to provide completely factual content. Children and teens may be especially vulnerable to over-trusting AI sources.
- Misinformation: AI can reuse, amplify, and even generate incorrect information. Malicious actors or bots may use AI to increase the spread of misinformation, fake news, or unverified content online.
- Thought Replacement: AI processes information differently than humans do, and it already outperforms them on some narrow tasks. LLMs like ChatGPT and Claude can generate complex, novel text at superhuman speed. That does not make human thought useless or obsolete. When used in learning contexts, AI should support and enhance students’ critical thinking, not replace skill development.
- Bias: LLMs are trained, in part, on human data and therefore may reflect human biases and stereotypes. These stereotypes may be problematically reinforced by the perceived authority of an AI source.
- Inappropriate Material: AI is often trained on material produced by and for adults. While most AI platforms include safeguards, care must be taken to ensure these safeguards remain in place and are effective, especially with teens who may find ways around them.
- Rapid Changes: AI is evolving faster than most other technologies, and it will be challenging for parents and caregivers to stay abreast of emerging risks and dangers.
Protecting Children from AI Risks
As with any technology, there are ways to engage safely. For AI, experts generally recommend staying informed about which tools kids are using and how they use them, introducing children to new tools gradually and with careful guidance, and focusing on the development of key AI literacy skills: knowledge of how AI functions, awareness of risks, identifying biases, and verifying information.
For more information on the possible risks of AI, as well as tips for helping your child use AI safely, see “Youth and Generative AI: A Guide for Parents and Educators”.