User- and AI-generated internet content is produced every day in staggering quantities – and with it comes greater access and exposure to misleading and false information. Are children more susceptible to misinformation and disinformation online than adults? What can parents and caregivers do to help them develop the increasingly critical skills needed to evaluate misleading and false content they find online?

What is the Difference Between Misinformation and Disinformation?

In a nutshell, “misinformation is just being wrong, and disinformation is lying,” says Imran Ahmed, Founder and CEO of the Center for Countering Digital Hate. Diana Graber, author of “Raising Humans in a Digital World” and Founder of Cyber Civics and Cyberwise, defines these terms more specifically:

    • Misinformation is false or misleading information not generally created or shared with the intention of causing harm. 
    • Disinformation is information that has been deliberately falsified. It has been created to mislead, harm, or manipulate a person’s social group, organization, or country.

In today’s AI-saturated media landscape, it can be helpful to tease apart misinformation generated by AI from the deliberate use of that information to mislead, which is disinformation. “When AI generates wrong information when you ask it a question, it’s badly made – it’s got bad inputs and so it gives you nonsense. That’s misinformation. But when AI tools can be used by people who seek to mislead people, to lie to them, to mislead them with intent – that’s disinformation,” says Ahmed.

Why are Children More Susceptible to Online Misinformation and Disinformation?

Children as young as two are able to attend to the reliability of people in their environment who are sharing information with them, says Andrew Shtulman, PhD, Professor of Psychology at Occidental College.  

While children have a natural ability to evaluate content skeptically, this skepticism is “shallow and can easily be knocked down in the face of a trusted authority or some kind of social cue that you should believe this information,” says Shtulman. The internet is a unique context that can thwart children’s natural defenses against misinformation that they use more easily in the real world, he says. “Online content is framed as verified facts, not hypothetical possibilities children can think about and reject. Online sources of information are often hidden or even fabricated, and children do not have access to the past records of accuracy and reliability for those sources.”

Research has found that when children are searching for information on a specific topic, they’re indifferent to whether the websites they come across contain inaccuracies or exaggerations, notes Shtulman. “Children are also poor at discriminating news from the other things that might appear on a news page, such as advertisements and user comments. Children do seem to be more susceptible to online misinformation than adults.”

AI is Accelerating Production of Disinformation

Today’s popular Generative AI tools are trained on large amounts of unfiltered, biased, and often inaccurate data, and those inaccuracies are then reflected in the outputs users receive. Some social media platforms are building their own AI tools trained on what users post on the platform. “That’s not a great way to train an intelligent tool, because a lot of what is posted on social media isn’t really intelligent now, is it?” posits Ahmed.

The rapid development and accessibility of these Generative AI tools have reduced the cost of producing real-seeming disinformation to zero, says Ahmed. This creates an ever-churning “BS machine” that is “flooding the information ecosystem with nonsense that’s hard sometimes to discern from reality,” he says.

Beware of News from User-Generated Content Sites Like TikTok

Young people get their information primarily from social media, says Graber, who cites research showing that, as of 2019, 54% of people aged 15 to 24 received their information from social media. A 2024 Pew study also found that four in ten young adults regularly get their news on TikTok. However, the number one news source for young people “far and away” is YouTube, with some teens using it daily, she says.

Sites like TikTok are built on user-generated content, often created by young people who “aren’t experts in anything yet.” That content is further amplified by the recommendation algorithm and can lead to the rapid spread of conspiracy theories and misinformation, says Graber, who shares a few more sobering data points:

    • A NewsGuard study found that within their first 35 minutes on TikTok, nearly 90% of participants were shown misinformation by the TikTok algorithm.
    • During the COVID-19 pandemic, one study estimated that over a quarter of the most-viewed YouTube videos about the virus contained misinformation.

User-generated content by nature is not vetted or fact-checked, yet “most children think that information on the Internet is usually accurate and can be trusted without further scrutiny,” says Shtulman.

Recognize the Power of Repeated Exposure to Misinformation

Even when people know better, repeated exposure to incorrect information makes it feel familiar, and that familiarity can lead them to believe or act on claims they don’t fully accept as true, a phenomenon known as the “illusory truth effect,” says Rakoen Maertens, PhD, Juliana Cuyler Matthews Junior Research Fellow in Psychology, New College at University of Oxford.

Maertens provides an example: “Imagine you’re at the grocery store with your children, and someone suggests buying apples because they’re healthy. But then, you recall seeing multiple TikTok videos and AI-generated images of apples with hidden insects inside, suggesting they might be dangerous. You’ve seen this information several times, and while you don’t fully believe that apples are unsafe, the repeated exposure has made the idea feel more plausible. So, even though you might not completely buy into it, the familiarity with the message influences your decision to skip the apples in favour of something else, reflecting how repeated exposure to misinformation can shape your actions.”

Technical Fluency Doesn’t Mean Skill in Identifying Credibility

Today’s children are often “digital natives” who surpass their parents in technical ability and know-how. However, this technical adeptness does not necessarily translate into the ability to judge the credibility of information. “There is a tendency to assume that children know how to make sense of the information that comes across their screens,” says Ahmed.

Research from the Stanford History Education Group indicates this assumption is false. A national study asked over 3,000 high school students in the United States to evaluate online sources, including a social media video that claimed to show voter fraud. Despite the easy availability of online information debunking the video, including a Snopes article and a BBC investigation, only three students out of 3,000, less than 0.1%, correctly identified the Russian source of the video, shares Joel Breakstone, PhD, Executive Director of the Digital Inquiry Group (formerly the Stanford History Education Group).

“We want people to be making decisions based on credible and reliable information. If young people are so easily misled – that should give us all pause,” cautions Breakstone.

Teach Children How to Research and Assess Information

Just learning to use tools safely is no longer enough, says Graber. “Kids need to learn how to do information research, how to assess what’s true and not true.” Education in media literacy, the ability to access, analyze, evaluate, create, and act using all forms of communication, is essential for today’s youth.

Yet research indicates that this skill development is not happening. “Ninety-four percent of teens say that schools should be required to teach media literacy. Yet only 39% report having had any media literacy instruction at all. We’ve got to equip kids with these skills,” says Graber.

Basic quality assessment of information involves evaluating three dimensions: accuracy, manipulativeness, and authenticity, says Maertens. Shtulman suggests focusing on the trustworthiness of sources rather than on the content itself, since it can often be difficult to tell fake from real from the content alone.

Encourage Development of Open-Minded Thinking

Research has found that one of the best predictors of resilience to misinformation is not rational thinking skills alone, but actively open-minded thinking, shares Maertens. What does “open-minded” thinking mean? First, being open to new ideas that do not align with your own views or those of your group. Second, having the intellectual humility to acknowledge that you might not be accurate.

“Teach people to get along with people that you don’t get along with,” urges Maertens. “If you manage to listen to other people’s views, even if you completely disagree with them, and accept them in your in-group, you will be more likely to come towards a common ground that is closer to the truth on average than if you get polarized.” Many people are trained in high school in a style of argumentation and logic that relies on finding a flaw and attacking it, an approach that has helped produce a polarized society, notes Maertens. While training in rational thinking is helpful and important for dealing with misinformation, Maertens suggests not forgetting to bridge divides, even with people you disagree with or who are spreading misinformation. Allowing them into your “friend” group can make a difference.

Learn How to Debunk

Learning how to properly prebunk and debunk online misinformation gives parents and caregivers a helpful tool for educating their own children, says Maertens, who suggests the Prebunking Guide and the Debunking Handbook as a good first step in this education. Learning to prebunk will help members of the family recognize the elements of misleading or false language or data and categorize content as fake before someone starts believing it. These skills translate across mediums: text, games, video, and others, notes Maertens.

Slow it Down – Overcome Intuitive Thinking with Rational Thinking

Cognitively, “we work with two systems,” says Maertens. The “first system,” which we use most of the time in daily life, runs on unconscious, intuitive decisions rather than analytical thinking. “The ‘second system’ is a slower, rational system that analytically looks at content, but it requires energy. It’s not easy to do. To use that system you need to really sit down and think,” he notes. This second system has to be nurtured and developed through education, and even educated people often revert to “system one” thinking. Children are even more “system one” based than adults because their “system two” cognitive capacities are not yet fully developed.

Understand the Pathway from Information Overload to Conspiracy Theory

The deluge of content online has “created an information ecosystem in which truth and lies intermingle seamlessly,” says Ahmed. “So it is unsurprising that kids find it difficult to work out what is true or not, and have no real sense of how they can work out what’s true or not.” The state of being uncertain about how to work out what is true is called “epistemic anxiety,” and it is correlated with conspiracist thinking, notes Ahmed. “In the chaos of not knowing what’s true or not, being told countervailing things, and not really being able to discern between the two, you end up just looking for something which is solid and seemingly satisfying, a big answer, but actually isn’t. That leads eventually to apathy.”

Cope With Sophisticated AI-Generated Content By Cross-Checking

Older methods of discerning AI-created art are becoming obsolete as GenAI capabilities rapidly develop, notes Ahmed. “We have been telling people to look out for things like three hands or six fingers, and that’s not a healthy thing to do. What about when the AI generator doesn’t generate six fingers? Then people think, ‘Well, that must be real then, because AI generates six fingers or three arms.’”

Cross-checking suspicious content is important as content generation tools evolve. “Our eyes deceive us online and the power of these tools is incredible. We don’t want to give students a false sense of security that they’re always going to be able to identify a problematic source,” says Breakstone. “The best approach is to say: ‘If I’m not sure if this thing is real, I should go somewhere else, and try to get a sense of what other sources say about it.’”

Watch How You Feed the Algorithm

People are likely to say they don’t like negative content, but human nature, operating through “system one” intuitive thinking, drives us to engage with and spread emotional and moral content more, says Maertens. Social media algorithms are fed by these preferences and will keep serving more negative and emotional content, which may be more likely to contain misinformation, he says.

Best Practice: Check Sources Over Looking for Internal Inconsistencies

Research indicates that effective fact-checking involves first leaving an unfamiliar source to find out who is behind it, instead of spending a lot of time reading closely, says Breakstone. Teaching these strategies to youth is critical to “up their game” to deal with misinformation, he says. “We want students to be able to figure out quickly: What is this thing? Do I know what I’m looking at?” 

Breakstone describes a study conducted by the Stanford History Education Group where three different groups of people were asked to evaluate unfamiliar online sources: Stanford University students, history professors, and professional fact checkers from prominent news organizations. The students and historians tended to carefully read through a new source first, while the fact checkers immediately left the unfamiliar source and searched online to find out who was behind the information. In one example, the fact checkers were able to quickly establish that a website advocating for lower minimum wages was run by a PR firm working on behalf of the food and beverage industry. 

“What was the thing that distinguished the fact checkers more than anything else? They understood something that seems a little bit counterintuitive – on the Internet, in order to understand a source, you have to leave it,” says Breakstone.

Investigate Motives Behind Information Sharing

The natural human desire to belong to a group plays into both intentional and unintentional sharing of misinformation, says Maertens. “People want to belong to groups.” This desire may lead them to share or create misinformation even when they don’t believe it in order to belong to a desired group. “This is a huge factor in the fight against misinformation – you might teach [children] rational thinking skills, but if they want to belong to a group, it might be beneficial for them to spread misinformation,” he says.

Take the Misinformation Susceptibility Test

Think you’ve figured out how to spot misinformation online? Test yourself with the two-minute online Misinformation Susceptibility Test that Maertens developed, which scores you on each dimension of susceptibility to fake news headlines. Data from test takers so far indicates that those scoring lowest on the test use Snapchat, TikTok, and Instagram as their primary sources of news, notes Maertens.

Find and Show Children Trusted, Quality Information

Instruction in spotting misinformation and disinformation could lead to an undesirable outcome where children’s skepticism turns into a blanket belief that nothing can be trusted, says Breakstone. Cynicism and apathy may lead to dismissing information as “fake” before proper evaluation. To counteract this, make sure to show youth instances where they can find quality information. “We want students to understand the incredible power that is at their fingertips,” he notes. “Make sure that we provide opportunities for students to see the powerful ways in which they can become better informed using online sources, and then also not to fall into a dichotomy of sources [as] ‘real’ or ‘fake,’ but rather so much of this stuff is in the mucky middle.”

Ask Basic Questions About Content Perspective

Teach yourself and your children to ask basic questions to evaluate the perspective behind a piece of content, suggests Breakstone: 

    • What is the perspective of the source? 
    • How might that perspective influence the content of the message that they’re delivering?
    • What’s the authority of the source to speak about a particular topic? 
    • Are they well suited to give me information that I could trust about this topic? 

The answers to these questions may not be black and white – “they exist on a scale and we should think about them and reason about them in each instance,” says Breakstone.

Start Media Literacy at a Young Age

The news environment for today’s youth is much more complex than that for earlier generations, who may have gleaned most news from a few main TV channels that had robust vetting protocols. As a result, it’s becoming more important to train children to think more deeply about knowledge at a younger age, suggests Shtulman. Even young elementary-age children are able to understand basic distinctions such as the difference between a fact and an opinion, or a fact and a statement of values, and can start grappling with these questions, he says.

Limit Exposure to Misinformation and Disinformation

At a basic level, you can reduce children’s exposure to the vast amounts of misinformation and disinformation online by limiting their access to the social platforms themselves, says Ahmed. “Let me tell you a dirty secret of Silicon Valley – most of the people here would never let their kids, when they’re young, spend time on [social media] platforms. That should be an indication itself as to whether or not they think they’re safe for your kids and my kids.”

Support Educator Access to Evidence-Based Curriculum

Outside of the home, Breakstone notes the importance of supporting educators by supplying them with ready-to-use curricula on misinformation and disinformation, as well as professional learning opportunities that help them use those materials effectively with students. These materials should be evidence-based to maximize their efficacy, he notes.

Advocate for Better Standards of AI Training

Today’s GenAI tools have been trained on vast amounts of internet data, much of it riddled with factual errors and bias. Ahmed urges us all to advocate for standards of expertise in AI tools in order to build up more trust in the content they generate. “If you want to train an AI to act like an expert, it should be trained by an expert of that field. Constraints on the models’ outputs should be put in place to ensure that they stop anything being generated that is harmful or misleading or dangerous. Implement error correction mechanisms to fix issues when they arise,” he suggests.

Related Webinar

This tip sheet was created based on content shared at the #AskTheExperts webinar “Unreal: Online Misinformation, Deep Fakes, and Youth” on November 19, 2024. Watch the recording, read the transcript, and view related resources.