Spotting deep fakes and misinformation online is increasingly difficult for adults–so what about children? 

Children and Screens held the #AskTheExperts webinar “Unreal: Online Misinformation, Deep Fakes, and Youth” on Tuesday, November 19, 2024 at 12pm ET. A panel of cognitive development and digital literacy experts explored how children perceive reality, factors that may make them cognitively susceptible to misinformation, and what parents, caregivers, and educators should know to help children think critically and develop skills to navigate misinformation and deep fakes in their digital lives.

Speakers

  • Diana Graber

    Author, "Raising Humans in a Digital World,"; Founder, Cyber Civics and Cyberwise
    Moderator
  • Imran Ahmed

    Founder and CEO, Center for Countering Digital Hate
  • Joel Breakstone, PhD

    Executive Director, Digital Inquiry Group
  • Andrew Shtulman, PhD

    Professor of Psychology, Occidental College
  • Rakoen Maertens, PhD

    Juliana Cuyler Matthews Junior Research Fellow in Psychology, New College, University of Oxford

00:00:11 – Introductions by Executive Director of Children and Screens Kris Perry

00:01:40 – Moderator Diana Graber introduces the topic of misinformation, including key definitions

00:09:07 – Rakoen Maertens on the cognitive science of misinformation, including why people create, spread, and believe misinformation, and how to protect against it

00:18:27 – Moderator follow-up: What specific skills do kids need most to prepare themselves for evaluating misinformation? What should elementary or middle schools focus on?

00:20:48 – Andrew Shtulman on how children’s cognitive development impacts their processing of real and fake information online

00:30:18 – Moderator follow-up: At what age do children begin to understand digital reality versus manipulated content?

00:32:45 – Imran Ahmed on deepfakes, AI, and other forms of disinformation created and disseminated by bad actors, their impacts on youth, and how society could address it

00:48:11 – Moderator follow-up: What advice would you give to parents of children with YouTube and TikTok channels?

00:49:47 – Joel Breakstone on youth digital literacy and how children can learn to determine fact from fiction online

01:00:15 – Moderator follow-up: Is there a push for Congress or schools to mandate digital media literacy lessons?

01:02:08 – The panel addresses questions from the audience.

01:02:21 – Q&A: How can people recognize mis/disinformation when there’s a kernel of truth in it?

01:04:26 – Q&A: What would you tell a teen to look for when determining whether something is a deepfake?

01:05:57 – Q&A: How do we support children who feel they cannot trust anything, whether it is a trusted source or not?

01:13:07 – Q&A: What are developmentally appropriate ways to introduce and discuss this with children of different ages?

01:15:57 – Q&A: How do you teach youth to learn from the news or fact-check when so much is behind a paywall?

01:18:55 – Q&A: Is active, open-minded thinking more of a skill or personality trait?

01:20:39 – Q&A: Should parents keep their children away from media entirely?

01:23:23 – Closing remarks from Children and Screens’ Executive Director Kris Perry.

[Kris Perry]: Hello and welcome to today’s Ask the Experts webinar, Unreal: Online Misinformation, Deep Fakes, and Youth. I’m Kris Perry, Executive Director of Children and Screens: Institute of Digital Media and Child Development. The ability to discern real information from fake information is more important than ever. However, spotting deep fakes and misinformation is a challenge even for adults. In today’s webinar, a panel of experts will discuss how children understand reality and truth, how their developing cognitive skills play a role, and how parents, caregivers, and educators can help them think critically to navigate misinformation and deep fakes. Now, I would like to introduce you to today’s moderator, Diana Graber. Diana is the author of Raising Humans in a Digital World: Helping Kids Build a Healthy Relationship with Technology. She is the founder of two organizations devoted to digital literacy, Cyberwise and Cyber Civics. Diana developed and teaches a popular middle school digital literacy program, currently being taught in schools across the United States and internationally. She has appeared on The Today Show and NBC Nightly News and has been interviewed by The New York Times, The Wall Street Journal, and more. Welcome, Diana.

[Diana Graber]: Thank you, Kris, and thank you so much for inviting me to be here today to talk about a subject that I think is really, really important. Before I introduce the wonderful panelists, I wanted to share a personal reflection on how important this topic is to me. I was thinking back to 2011 or so, when I first started working with kids on digital literacy. At that time we were really focused on digital citizenship, which is the safe and responsible use of digital tools. And, you know, we thought that was great, but it’s really just the tip of the iceberg of what kids need to learn, which became immensely clear to me as I worked with these kids. At that time, some eighth graders were presenting research projects that they had worked on for quite a while. I was listening, and one of the girls, her topic was human trafficking, and from the audience I heard her repeat what I knew to be a conspiracy theory. Later I learned that she had done her research primarily on YouTube, because she figured if it was on YouTube, it must be true. So that really underscored to me that just learning to use your tools safely is not enough. Kids need to learn how to do information research, how to assess what’s true and not true. That was a long time ago, 2011; today kids are dealing with AI and all these new technologies. So this topic is super, super important. Alright. So I’m going to set the stage today by talking definitions. Let me share my screen. Alright. Just a few definitions here. We know that misinformation is false or misleading information not generally created or shared with the intention of causing harm. Disinformation is information that has been deliberately falsified; it has been created to mislead, harm, or manipulate a person, social group, organization, or country. An example here is from Hurricane Milton. Hurricanes often seem to give us a lot of fodder for misinformation and disinformation, which leads us to the deepfake. This is an image or video that has been altered to misrepresent someone as doing or saying something that was not actually said or done. Here is a deepfake that was shared widely during our most recent hurricane. Maybe you saw it. We know that this is becoming more and more of a problem, because between 2019 and 2023 the number of deepfakes online increased by 552%, with research indicating it’s going to increase even more. In this survey, 50% of participants rated deepfake content as probably real or definitely real even after being informed that there was a chance they’d seen a deepfake. So here’s an example:

[Video]: “…One example. What you’re watching right now. I’m actually not Jake. I’m a deep fake Jake. A deep Jake, if you will. Created by comedian Danny Polishchuk, who has only been using AI for two weeks. The fact that I seem so real suggests that real mischief and serious damage could be done with this technology, were it to be so utilized in the US as it has been used for bad purposes abroad.”

[Diana Graber]: So yeah, pretty good right? Okay. So, we know that young people are getting their information primarily from social media, as this chart will show you: 45% of people aged between 15 and 24 receive their information from social media. That study was done a few years ago. Here’s a more recent study just put out by Pew: 4 in 10 young adults in the U.S. now regularly get their news on TikTok. Pretty crazy, right? As I always tell students, remember, TikTok has very young users and it’s user-generated content. Generally, those users aren’t experts in anything yet, so be aware of what you find on TikTok, right? Okay, so where else are kids going for information? Well, number one by far and away is YouTube. Still, 95% of teens use YouTube, and some say they use it almost daily. And then significantly behind that are the big three: TikTok, Instagram, and Snapchat. Again, these are all user-generated networks. Okay. A NewsGuard analysis finds that in their first 35 minutes on TikTok, 88.8% of participants are shown misinformation, thanks to their awesome algorithm. On YouTube during COVID, it was estimated that a quarter of the videos related to COVID contained misinformation. This platform can amplify misleading content through its recommendation algorithm, which can lead to the rapid spread of conspiracy theories and misinformation. I’m going to show you an example. I pretended to be a student doing a research paper on the moon landing. The first video, I thought, “look, a legitimate video,” but very quickly I fell into the YouTube rabbit hole. These are the recommendations that YouTube gave me: Did Stanley Kubrick fake the moon landings? I don’t believe the moon landing happened. The moon landing conspiracy. So you can see what happens to kids; unwittingly, they start getting these recommendations from YouTube. And we know this to be true. This is research from the News Literacy in America project: 80% of teens see conspiracy theories on social media, including the idea that aliens live among us. Keep that in mind for a moment. About half reported seeing them at least once a week. Of the teens who reported seeing conspiracy theories, 81% said they believed at least one. So what do we do? I really think the best weapon we have against this, especially with gen-AI and all this stuff coming at us, is to equip kids with education, and media literacy is one way to do it. This is the ability to access, analyze, evaluate, create, and act using all forms of communication. But this is also from the News Literacy in America report: 94% of teens say that schools should be required to teach media literacy, yet only 39% report having had any media literacy instruction at all, even one class, in the last school year. So that is just where we are on this. We’ve got to equip kids with these skills. So we actually went full on: after my little experience in 2011, we added a whole year of information literacy, a whole year of media literacy, all these lessons on algorithms and AI and filter bubbles and the things that kids really need to know about. And I laughed when I read that research about aliens among us, because we actually started offering lessons for elementary school kids on the same topics. I’ll just play a moment of this for you.

[Video]: “Whoa. It says right here that aliens just landed. I can’t believe this. Turns out, I really couldn’t believe it because it wasn’t true. Did you know that not everything you see online is true?”

[Diana Graber]: So, I’m going to stop it there, because we have a lot of great people to get to today. I believe that video will be offered in the resources if you want to watch the whole thing later. Okay. Dr. Rakoen Maertens–hopefully I said that correctly–is the Juliana Cuyler Matthews Junior Research Fellow in Psychology at New College, University of Oxford. He completed his BSc and MSc in experimental psychology at the University of Ghent. He then pursued his Ph.D. in psychology at the University of Cambridge, in Professor Sander van der Linden’s lab, focusing on how to apply psychological science to fight fake news and misinformation, for which he received the CSAR Award for Applied Research. He co-founded the Cambridge University Behavioral Insights Team and the Oxford Behavioral Insights Group, and he currently serves as the Society for Personality and Social Psychology’s representative to the United Nations. Thank you so much, and take it away.

[Dr. Rakoen Maertens]: Thank you so much for the introduction. Okay. So I am going to give a quick introduction to the cognitive science of misinformation susceptibility. I am a psychologist, and I am also a behavioral scientist. And really, eight minutes is not enough to give you a good introduction, but I will flag some concepts so that you can think with the psychologist’s toolkit on this issue. Right. So let’s start with a concrete example. Imagine that you go to the shops to buy some groceries, and you are there with your children. Someone in your family has this idea: let’s buy apples. And you know, apples are healthy, good for the winter; now that everyone is getting ill, maybe get your apples. But maybe on TikTok or elsewhere you’ve seen a video claiming that apples are actually full of dangerous insects hidden inside them, and that you might get infected. Maybe there were AI-generated photos and so on, and you might have been repeatedly exposed to this. You might not believe it. You might say, “This is nonsense!” but the repeated exposure might settle this in your mind. And then next time when you’re in the shops, you have these two ideas. You have these healthy apples, but that AI-generated image might also come to mind. And what often happens then? You might change your behavior while not changing your belief. You might still not believe the misinformation, but because you have this image of this bad-looking apple, you might still skip it. This is an example of the illusory truth effect, where you repeatedly get exposed to something and you might start believing it, or, even if you don’t believe it, you might still act upon it, even if you are not fully on board. And then undoing the damage is another one. Imagine that your kid believes this, that apples are dangerous, and then you tell them, no, no, it’s not true. It might still be very hard to undo the damage. There might be this link settled in the brain. Once the damage is done, it’s often hard to undo it. As a behavioral scientist, someone who is also interested in this link between what you believe and your actual behavior, this headline is, for me, a very good example of how that link exists: people set cell phone towers on fire because they believed in a COVID conspiracy theory. One wouldn’t set a cell phone tower on fire if it wasn’t for a good reason. I hope. Okay. So in general, psychologists, again very simplified, think of the brain as working with two systems. System one is what we use most of the time. We make intuitive decisions all the time: the way we walk, how we behave in most situations. We don’t really think about this in an analytical way; we just do it. It’s an unconscious, efficient system, and we use it most of the time. System two is a slower, rational system that looks at content analytically, but it requires energy. It’s not easy to do. To use that system you need to really sit down and think. And of course, system two is something you have to nurture and develop, and education is important for that. But even if we are educated, we still lean on system one, and children especially are even more system-one based, because their system two is not so strong yet. Now, if we look at information, we have to evaluate all these dimensions: accuracy, manipulativeness, authenticity. Is this a high-quality source? What is the quality of this data? There is a lot going on, and at the same time you have information overload.
We see that partially due to algorithms, partially due to human nature. We spend more time on negative content. We don’t like negative content, but we spread emotional and moral content more often and we engage with it more, and that boosts the algorithm. And this is largely driven by system-one thinking. Now there are all kinds of interventions, of course, that activate your rational thinking systems. For example, accuracy nudges were first used on Twitter, where, when anyone tried to share a potential misinformation message, Twitter might ask you (X still has some of these left): do you really want to share this? Do you really want to like this? That stop-and-pause moment could help you activate system two, but it is difficult, and the long-term effectiveness is disputed. Another big factor, again system-one based, is identity. People want to belong to groups, right? And a lot of people share misinformation, or even create misinformation, even if they don’t necessarily believe it. They often do it because they want to belong to a group. So whether it’s Democrats or Republicans, or in high school, you want to belong to this group of people that are the cool kids in school. Maybe what you have to do is share information that aligns with that group’s vision, even if you don’t believe it. And this is a huge factor in the fight against misinformation: you might teach kids rational thinking skills, but if you want to belong to a group, it might even be beneficial for you to spread misinformation. So, how do you balance these priorities? There is an interesting variable that I want to mention quickly. In my research, I found that, surprisingly, one of the best predictors of resilience to misinformation is not only rational thinking skills but also being open-minded, this actively open-minded thinking. This means two things. Being open to new ideas that are not aligned with your own vision, or not in line with your own group. And being humble, having intellectual humility, being able to acknowledge that you might not be accurate. This is actually a good skill to have, and it could help you avoid getting stuck in a really robust conspiracy. Now, we work a lot on interventions to counter misinformation, and you might think of this yourself. Whether you’re a kid or an adult, you can use many of the tools that we’ve developed as psychologists. But just to give you a quick idea, debunking is really difficult, right? Because of the continued influence of misinformation, but also because you might by accident repeat the myth, or the person trying to debunk it might be someone you don’t like. Then this clash of visions about the world might make you distrust the fact checker, and so on. Luckily, there are some tools. So, this is the first step: have a look at the Debunking Handbook. It gives some really good tips on how to do debunking properly. You could also intervene during exposure to misinformation, like with the nudges that I mentioned. Or you could indeed educate, and one of those methods is inoculation theory. No time to explain, but in a nutshell, the idea is that you can teach people how to recognize elements of misleading language, false language, or potentially bad data, you can do that at any age, and then they can categorize content proactively as potentially fake before they start believing it. And you can do this in text-based form, gamified, video-based, and so on. The gamified form is especially fun to do in classrooms, right? Or with your kids at home, and so on.
So, there is a really good resource for educators, an educator sheet as well. And some of these interventions have a version specifically for kids. “The Bad News Game,” for example, a fun 15-minute game to play, has a Bad News Junior version. That’s just an example. Typically we teach people about different techniques of manipulating people online so that you don’t fall for them. It’s available in many different languages. Okay. There are many different types of similar interventions: Cranky Uncle, Spot the Troll, and so on, and videos as well. These videos are also fun to watch and discuss. So if you need any of these resources afterwards, do let me know. Okay. A final fun thing is that I developed a test, the Misinformation Susceptibility Test, and you can try it online at this link; in a classroom, I would now show a QR code and we would take a moment to discuss the results. But feel free to do it and maybe post a result in the chat. It’s a two-minute test: you basically have to categorize real and fake headlines, and then you get a score on each dimension of your susceptibility and can compare it with others. It’s also fun to do in school, of course. I’m going to skip ahead, because I’m out of time. But in general, the test shows that older people are surprisingly better at it. So there’s this big debate going on: are younger people or older people more susceptible? It seems to be younger people, at least according to this test. That debate isn’t settled, and there are probably different dimensions of susceptibility. But what we do see is that the lowest scorers on the test use Snapchat, TikTok, and Instagram as their primary sources for news. So, from the introductory slides we just had, this seems to be a big problem. Okay, for more information, read the book Foolproof. I can recommend it; it gives you a full psychological introduction to the topic. And remember, from a psychological perspective, some of the big factors are social identity, reasoning skills, system-two thinking, and open-mindedness. And really, play the inoculation games, encourage this norm of intellectual humility, and focus on building bridging visions rather than just attacking opponents. Okay, right? Any questions? Oh no, there was a structure for this. Right.

[Diana Graber]: Well, I have a lot of questions. I really love that presentation. And the open-mindedness thing, I’d never really considered that before. So, that kind of leads into one of the questions I wanted to ask you regarding youth. In your view, what specific skills do you think kids need most to prepare them to deal with all this? What would you suggest parents zone in on, maybe at the elementary level or the middle school level?

[Dr. Rakoen Maertens]: Well, I’m going to say something. I mean, there are many things you could expect me to say, like certain types of media literacy skills and so on, but I will actually say something completely different. I would say: teach people to get along with people they don’t get along with. If you manage to listen to other people’s vision, even if you completely disagree with them, and accept them in your in-group, you will be more likely to come toward a common ground that is closer to the truth on average than if you get polarized. I feel a lot of training on argumentation and logic is about, okay, finding where the flaw is and attacking it, right? And that’s something that I got in my high school. I got a rational thinking lesson, but I think the mistake there was that this can lead to more polarization. And if you have a polarized society, you will have less and less mutual understanding and you will go further into your silos. So, my advice would really be: play the games and do the rational thinking training, but also don’t forget to focus on bridging divides. Even if you don’t agree with someone, even if someone is spreading misinformation, allow them into your friend group; whether they’re kids or adults, the same mechanism applies.

[Diana Graber]: That’s a great tip. It really reminds us that so much of this is about social-emotional skills: how we think about other people, having empathy and understanding, and all that. So, thank you for that reminder. Great presentation. Alright. Now we’ve got Dr. Andrew Shtulman. He is a professor of psychology at Occidental College, where he directs the Thinking Lab. Dr. Shtulman–I hope I’m saying that correctly–is a cognitive developmental psychologist who studies the development of intuition, imagination, and reflection. He earned a B.A. in Psychology from Princeton and a Ph.D. in Psychology from Harvard, and is the author of Scienceblind: Why Our Intuitive Theories About the World Are So Often Wrong, and Learning to Imagine: The Science of Discovering New Possibilities. So, welcome.

[Dr. Andrew Shtulman]: Thank you. Thank you for having me. Today I’m going to talk about children’s susceptibility to online misinformation, and by children I mean elementary-school-age children. Alright. Here’s a story that circulated far and wide on the internet several years ago: “California newborn becomes first baby to be named an emoji.” Her name is “heart eyes heart eyes heart eyes,” from “prettycoolsite.com.” This story is not true, but it was shared thousands of times and liked tens of thousands of times. And chances are, some of the people who liked and shared the story were children. These days, children are on the Internet at increasingly earlier ages; 89% of elementary schoolers have regular access to the Internet. But while they’re on the internet, they don’t necessarily understand what it’s about. Most children think that information on the Internet is usually accurate and can be trusted without further scrutiny. Other research has found that when children are searching for information on a specific topic, they’re indifferent to the websites they come across, whether they contain inaccuracies or exaggerations. Children are generally poor at identifying satirical web content, like a web page devoted to the Pacific Northwest tree octopus; most of the children who went to this site had no idea it was a joke. And children are also poor at discriminating news from the other things that might appear on a news page, such as advertisements and user comments. So children do seem to be more susceptible to online misinformation than adults. Why is that? One possibility is that children are especially prone to believe content; they just believe everything. Another possibility is that children are especially prone to believe sources of information; they believe everyone. But decades of developmental research suggest that neither of these possibilities is correct. When it comes to evaluating content, children definitely do not believe that anything is possible. From as early as you can ask children about physical possibility, they deny that things that violate physical laws could happen in the real world. In fact, young children are actually too skeptical when it comes to assessing possibility. They not only deny the possibility of genuinely impossible things, but also things that are merely improbable. So, if you read a story to a child that includes impossible events, like eating lightning for dinner, they’ll claim that couldn’t happen in the real world; it’s impossible. But if the story also contains just unusual things that don’t violate physical laws, like someone finding an alligator under their bed, they also claim that that isn’t possible. So children know that fantastical events cannot happen in the real world. What they need to learn is that some unexpected events can. When it comes to evaluating sources, children are quite adept at tracking the reliability of informants, the people in their environment who are sharing information with them. As early as age two, children are attending to an informant’s accuracy, knowledge, competence, and confidence when deciding whether to trust them. This has been shown in a selective trust paradigm, where children are introduced to two informants and given some information about their reliability. For instance, the informants might see a common object and be asked to label it, and one informant provides the correct label while another provides an incorrect label.
And you do this for several trials, and then the informants see a novel object and label it too, say it’s a blicket or it’s a dax, and if you ask the children what they think the novel object is, they’ll side with the informant who proved accurate in the past trials. So children are neither credulous nor gullible, not even at very early ages. But the Internet is a special place. The Internet thwarts children’s natural defenses against misinformation that they might use in real-world contexts. Online, content is framed as verified facts, not hypothetical possibilities they can think about and reject. Online sources of information are often hidden or even fabricated, and children do not have access to the past records of accuracy and reliability for those sources. So how can we help children become better consumers of information online? In my lab, we’ve been asking children between the ages of four and 12 to judge the veracity of real and fake news stories that were found on the fact-checking website Snopes.com, and we’ve also been attempting to improve their judgments with training. Here’s one of the true stories we’ve been presenting to children: “A walrus named Freya is sinking boats and causing mayhem in Norway,” from HuffPost.com. Here’s one of the fake stories: “Disney files patent for roller coaster that jumps track,” from MousetrapNews.com. In addition to asking them to judge the veracity of these stories, we’ve also been encouraging them to think more deeply about the content of news stories, in particular to ask themselves, “Does this story make sense given what I know about the topic?” and “Does this story make sense given what I know about the world in general?” For other children, we’ve been encouraging them to think more deeply about the sources of the story, asking themselves, “Does this story come from a professional news organization?” and, if I don’t know or can’t tell, “Is it reported neutrally and objectively?” We haven’t found very many differences between the trainings. When it comes to judging fake news, children are actually pretty good at realizing that fake news is false: 67% of the time, even before training, they judge fake news as false, and that number increases to 76% after training. But the catch is that they’re also judging real news as false. Only 35% of the time do children judge the real news as true before the training, and that number drops to 24% after the training. So our trainings are making children more skeptical but not more accurate. Their overall accuracy across the two types of news is 51% prior to training, and it stays around that number after training. So you might wonder, is this a problem? Is it a problem that children are erring on the side of rejecting real news rather than accepting fake news? We think it is, because fewer children are actually differentiating the two types of news, which implies that their judgments are rather shallow: gut reactions to some superficial detail in the story. Among the 4- and 5-year-olds, we see that only 20% are differentiating the two types of news in the correct direction. Among 6- and 7-year-olds, it’s only 28%. Among 8- to 9-year-olds, it’s 35%. Among 10- to 12-year-olds, there’s a jump up to 61%. But across the entire set of stories, the differentiation is quite weak.
If children are making shallow judgments, this is a problem, because these judgments are probably easily overridden in real-world contexts by social cues, like how often the story is liked, how often it’s shared, or whether it’s been shared by a trusted informant like a family member or friend. So then what’s missing? What do children need to know to differentiate real news from fake news? We’ve been implementing these exact same studies with adults, and the results have been providing some clues. We find that the content training does not help adults differentiate fake news from real news, just like it doesn’t help children; it just makes adults more skeptical of all news. But source training is helpful for adults. When we give adults source training, first of all, before any training, 81% of the time they judged our fake stories as false, and after training that jumps up to 90%. For real news, 73% of the time they judged our real news stories as true prior to training, and that also increases, up to 79%, after training. So overall accuracy for adults before training, given our particular story set, is 77%. But by reminding them that they should be paying attention to the sources, their accuracy increases to 85%. And I think that’s really the critical difference between children and adults. Adults aren’t great consumers of news online, but they do have a knowledge base of sources that they can draw on to decide what’s reliable and what’s not. Children do not have that same knowledge base. So, from my perspective, I think media literacy education should be focusing on sources, not content, because content does not provide sufficient leverage for differentiating fake news from real news. Unexpectedness might be the primary sign that a fake news story is false, but it’s often also the reason why real news is in the news at all; it’s what makes it newsworthy. Some open questions we’re pursuing now are: what information about sources might be most helpful to children? Perhaps it’s general information about journalistic practices, how stories are reported, written, and vetted. But maybe it’s something more specific about the news outlets themselves, which ones are reliable and which ones are not. Alright. I will end there and open the floor to questions.

[Diana Graber]: That was great. Thank you so much. I think that focus on sources is such a great recommendation because it’s so simple. You know, even with a young child, you could say, okay, well, who was the author of this? Who is this person? Is it a random person? Is it an expert? So I think that source bit is so helpful. But there’s another question here I wanted to ask you: “At what age do children begin to understand the difference between reality and digitally manipulated content?”

[Dr. Andrew Shtulman]: Right. You know, young children’s perceptual abilities are nearly as good as an adult’s. So, if the digitally manipulated content gives any cues perceptually, kids can pick up on those just as well as an adult can. If it’s a video where the motion is jerky or the hands are wrong, kids will see that just like adults will. But if it’s perceptually convincing to an adult, it’s going to be perceptually convincing to a child. So I think the differences you would see are in acceptance, because adults might start questioning, “Where did this come from? Why am I seeing it? What are the intentions behind the picture, the video, whatever it might be?” whereas children are less likely to do that. So it’s probably the middle school years where you see maybe the biggest transition, from the young children who are not making much of a differentiation at all between real and fake news to adolescence, where that’s emerging.

[Diana Graber]: The skepticism is coming in.

[Dr. Andrew Shtulman]: Well, right. I mean, they’re always skeptical, but the skepticism is shallow. So, I think that’s important to remember that children are not just open minded, accepting everything. They’re actually skeptical about everything, but it’s a shallow skepticism that can be easily knocked down, in the face of a trusted authority or some kind of social cue that you should believe this information.

[Diana Graber]: Alright. Great. Well. Great information. Thank you. All right. So, we’re going to move on now to our next presenter. Next up: We have Imran Ahmed. Am I saying that correctly? Hopefully, give me the nod. Okay. Thank you. He is the Founder and CEO of the Center for Countering Digital Hate US/UK. He is an authority on social and psychological malignancies on social media, such as identity-based hate, extremism, disinformation and conspiracy theories. He regularly appears in the media and in documentaries as an expert in how bad actors use digital spaces to harm others and benefit themselves, as well as how and why bad platforms allow them to do so. Imran also advises politicians around the world on policy and legislation. Welcome.

[Imran Ahmed]: Hi. Thanks so much for the kind introduction. My name is Imran Ahmed. I’m CEO and Founder of the Center for Countering Digital Hate. What the center does is try to understand the ways in which social media, and new tools like A.I., can actually cause real-world harm. And A.I. is a growing area of our focus. We’re trying to monitor the risk landscape by testing the tools, testing their capabilities, testing their guardrails, and also understanding the ways in which the content that they produce is being spread on social media, which is, of course, the most powerful tool invented in history for the fast, almost cost-free distribution of information, whether that information is right or not. One of the things that unifies both these types of platforms, social media and A.I., is that it’s become really clear over time that they aren’t designed with safety in mind. They aren’t actually designed with the intent to create good information; they’re not about being correct or accurate or teaching us about reality. They’re entertainment platforms. That’s how they make their money. You use them lots and you watch ads, and that’s how they have made themselves incredibly, immensely, unbelievably wealthy. And what we’ve realized over time, and I’ve been doing this work for eight years now, is that platforms know they are harming people. They know they’re harming kids. Their internal studies, as brought forward by whistleblowers, show that they know what they are up to, they know the harm that they can cause, and they have a model when they deal with criticism of denying, delaying, and deflecting. In some instances, you know, even more than that. Let me speak briefly about disinformation and misinformation. These are terms that you’ve heard being used today, and they can be a little bit confusing. I don’t like terms like that. I find that in some respects they’re almost exclusionary, because they sound scientific. But misinformation is just being wrong, and disinformation is lying; the intent is there. Being wrong–folks are wrong all the time. I’m wrong. I mean, ask my wife. But lying is a sinful thing to do. And I think when we’re dealing with A.I., it’s really interesting as a thought experiment to think about what A.I. is doing when it generates wrong information. For example, you ask a platform a question; it’s badly made, it’s got bad inputs, and so it gives you nonsense. That’s misinformation. But A.I. tools can also be used by people who seek to mislead others, to lie to them with intent, and that’s disinformation. Let me speak for a moment about the core problem of generative A.I. models. The core problem with A.I., and the reason why it’s becoming such a cultural phenomenon, and why we need to understand the risk, is that, like social media, this is about the creation of tools that operate at zero cost. We’ve talked about that being called friction before, a lack of friction. Social media’s great innovation is that it allows the distribution of a message to each additional person, and of each additional message, at zero cost to the sender. And that’s unique in history. We’ve never had a way of sending a billion people a billion messages without any cost to the sender themselves, apart from the time and effort it takes to produce the message. Along comes generative A.I., and that reduces the cost of production to zero.
So, you’ve got a system in which you can produce stuff and distribute it at zero cost to the originator, and that’s really creating a perpetual B.S. machine, and that’s problematic. It’s flooding the information ecosystem with nonsense that’s sometimes really hard to discern from reality. We’ve talked earlier about being humble about this sort of stuff. There’s a thing I do whenever I’m doing presentations like this in a room of real people–I’m sure you’re all real, but you know what I mean–and I ask the question, now, what vegetable gives you better night vision? And everyone knows I’m up to something. And they go, “Well, I know the answer, but I’m not going to say it because he’s up to something.” And I am up to something. And then eventually someone brave pipes up and goes, “Carrots!” And I tell them that that was actually World War II disinformation produced by the British Secret Intelligence Service to hide the fact that we’d developed plane-mounted radar, and that’s why we were shooting down German bombers at night. Our Secret Intelligence Service wanted to hide our new technology, so it put it in the newspapers that we were feeding our pilots carrots, because we’d discovered that if you give them lots of carrots, they get better night vision. And 80 years later, after the war, we still all believe it, don’t we? So, disinformation is all around us. But we’ve never had this asymmetry, in which there can be so much of it pumped into our information ecosystem. And of course, think about a young person. I’m a new father, a very new father; I’ve got an 11.5-week-old daughter at home, and I think really hard about what it is that I want to teach her, because of course I’m curating the information I give to her, helping her to learn in an effective way. The science of pedagogy is really complex, and it’s something we think about really hard as parents. Now imagine immersing your child into a space in which nonsense and truth intermingle seamlessly, without any real way of differentiating between the two. Can anyone imagine why that’s a real problem for us? And the second problem, of course, is that we call these systems, like A.I., artificially intelligent, when quite often the stuff that they put out is nonsensical–is untrue. There’s almost a problem with the word intelligence, isn’t there? I mean, it’s not intelligent if it’s spouting nonsense. And I’m going to show you some ways in which it is nonsense. We do tests. We look at how these models behave–models which are trained on large amounts of unfiltered, often biased and inaccurate data. I mean, there are some social media platforms developing their own A.I. tools, which are based on what people post on social media. That’s not a great way to train an intelligent tool, because a lot of what is posted on social media isn’t really intelligent now, is it? And that becomes an even bigger problem when A.I. is integrated into search engines. When you search for information–you know, “Google” is a verb. It’s a company, sure, but we “Google” information, don’t we? We “Google” a question, you know: “What’s the best diaper for my baby?” Google it. But when the first result is an A.I.-generated answer, and that can actually be based on poor data, that can be really problematic. And we test the platforms to see how they work. For example, we’ve done studies looking at how popular A.I. tools respond to a question from a child about how they can lose weight.
One of them said, “Chewing and spitting is an effective way to do it.” Another one said, “Camouflaging food in everyday items to hide your uneaten food is a good way to fool your parents and eat fewer calories.” One of them–this is honestly true–said, “Swallow a tapeworm egg and let it grow inside you to lose weight.” Artificially intelligent. We looked at things like Grok, the A.I. platform integrated into X. We asked it 60 questions about election disinformation, and it didn’t reject any of them in terms of generating new disinformation. It generated everything from images of election ballots being burnt and stolen to hateful images of Jews, Muslims, Black people, and LGBTQ+ people. And we found that bad actors are using these tools to create and spread political disinformation as well. There are people out there who are so willing to mislead their fellow citizens and our children that they will use these tools to produce disinformation at scale. So, we have a problem, right? We have these systems which we tell people, which we tell our kids, are artificially intelligent. They are often based on poor data sets. They often generate without thinking the way a normal human being would. I mean, if a 12-year-old girl came up to me and said, “How can I lose weight?” I wouldn’t tell her to swallow a tapeworm egg. I don’t think anyone would. And yet these systems are considered the cutting edge of human endeavor and understanding. So it’s in part the way that we present these technologies, the way that they’ve been built, the commercial models that underpin them. And what’s the impact? We did a study a few years ago on–oh no, gosh, last year–time dilates when you’re working hard. We did a study last year asking young people about conspiracy theories. We asked adults and kids, a thousand adults and a thousand kids, and we did it in both the US and the UK, my home country–I live in the U.S. though–about conspiracy theories. And what we found was that the most conspiratorial age cohort was 14- to 17-year-olds, the first generation raised on short-form video platforms. That’s really disturbing. For example, on antisemitic conspiracy theories, over half of young people who used social media for four-plus hours a day believed that Jews were malevolent and had control of our politics and our economy. To a European, those are scary numbers. They lead to bad things, those sorts of numbers. So what do we need? Well, the discussion around the types of policies we want doesn’t really change whether it’s A.I. or social media that we’re talking about. This is a two-pronged issue. You’ve got an industry that we need to ask: how are the tools being developed and tested? How are they managing risk? How do they work to ensure that their tools are not being used to generate harmful content? And the social media platforms: we’re asking them how they’re collaborating to ensure that harmful A.I.-generated content is not spread on their platforms, and how they enforce their own rules. Right? They’ve got rules on these platforms. We all sign up to them when we join: we agree that we won’t spread disinformation, we won’t be hateful to other people. How are they enforcing those rules? And we have a program that we call the STAR framework. That’s a framework that I’ve been advocating to lawmakers around the world for many years now.
It says that safety is created, first of all, by transparency: not less speech, but better and more speech. Tell us how your platforms work; give us more insight, so we are informed when we then hold you accountable. That’s the “A” in STAR: we ask you tough questions and we expect real answers. And then, if you are causing harm, we should be able to hold you responsible. Look, I’ve dealt with parents whose kids died because of self-harm content online that was flooded to them by algorithms. Which is really disturbing when you deal with them; especially as a parent, it gets you in the gut. And they can’t hold those companies responsible, despite the fact that their kids were flooded with thousands of images that may have induced harm, may even have induced them to hurt themselves, to cut themselves or kill themselves, or to suffer eating disorders or other psychological trauma. There’s no way of holding those companies accountable at the moment. So, how can we hold them accountable in the future? But there’s a million other things they could do as well. These platforms aren’t designed, as I said earlier, with safety in mind from the outset. So: incorporating safety features such as curating and vetting learning materials before the A.I. analyzes them, to remove harmful, misleading, or hateful content. Ensuring that subject matter experts are employed and consulted in developing training material for A.I. I know there’s no such thing as a racist algorithm, but there are algorithms programmed by people who aren’t thinking cleverly about whether or not the outputs may be damaging or discriminatory. If you want to train an A.I. to act like an expert, it should be trained by an expert in that field. And constraints on the models’ outputs should be put in place to stop anything being generated that is harmful or misleading or dangerous. Implement error-correction mechanisms to fix issues when they arise. And you know, I’m saying all these things and you’re thinking, crumbs, well, these are obvious, Imran. And you’re right, they are. But they’re not being done right now. And that’s why CCDH and others, and I recognize many of the folks who are speaking today, are working so hard on bringing transparency, shining sunlight on the way these systems work, so that we get these companies to realize that most people would be annoyed if they knew that due care hasn’t been taken with platforms that have such an enormous impact on our society. I mean, there is no real transparency when it comes to these models. There is no transparency on how their safety works, and that really is what we should all be working towards, because these technologies, like it or not, are here. They’re here to stay, and we want to make sure that they serve humanity in the most positive way possible, because they have such enormous potential. Thank you, and I’m happy to take any questions.

[Diana Graber]: Well said, well said. So thank you so much. We’re a little short on time, but I’m gonna ask you a really quick follow up to that. So many kids today have YouTube channels and TikTok channels. What advice would you give to those parents?

[Imran Ahmed]: Well, I think it really comes down to each individual child, in one respect. Some children are able to discern the nonsense they’re seeing; some of them have a robustness and resilience. Some of the advice being given earlier today is incredibly, incredibly useful. But let me tell you a dirty secret of Silicon Valley–and I’m actually in Silicon Valley at the moment: most of the people here would never let their kids, when they’re young, spend time on those platforms. And that should be an indication in itself as to whether or not they think they’re safe for your kids and my kids.

[Diana Graber]: Great advice. Well, thank you so much, appreciate it. Alright, so we’re going to move on to our final presenter today, Dr. Joel Breakstone. Welcome. He is the Executive Director of the Digital Inquiry Group (DIG), a new education nonprofit. He directed the Stanford History Education Group for ten years before spinning it out of Stanford to become DIG, and he leads its efforts to research, develop, and disseminate free curriculum and assessments. This work has been featured in The New York Times, on NPR, and in The Wall Street Journal. He received a Ph.D. from the Stanford Graduate School of Education and previously taught high school history in Vermont. Welcome.

[Dr. Joel Breakstone]: Thank you so much, Diana, and thanks everyone for joining us today. It’s a real privilege to be able to spend some time with you. As Diana noted, our organization, the Digital Inquiry Group, is new, but our work in this area is not. For nearly two decades we were based at Stanford University, focused initially, as the name of our organization suggested, on creating history curriculum and materials, but for the last ten years focusing as well on how to support young people to be better consumers of online information. And certainly that is a crucial issue, as all of the presenters so far have identified. Young people are living online lives, and we need to support them in making sense of the overwhelming amount of information that streams across their devices. That issue is particularly important because, since young people have grown up with digital devices, there is a tendency to assume that they also know how to make sense of the information that comes across their screens. During the last election cycle, Politico published a piece in which they said, “To be sure, Gen Z doesn’t need lessons on how to use the internet. They aren’t falling for the same fake news stories that may have duped their parents in 2016.” And my colleagues and I were pretty struck by that statement, because as it came out, we were in the midst of a national study in which we were asking students all across the United States to evaluate real online sources–in total, more than 3,000 students, a sample that represents the population of high school students in the United States. One of the tasks that we asked students to complete was to examine a social media video which claimed to show voter fraud in a previous American election. When students completed that task, they had access to a live internet connection, and we told them that they could do whatever they wanted to figure out whether or not to trust that source and encouraged them to do what they normally would do. If students had just opened up a new tab in their internet browser and searched for information about the video, they could have almost immediately found out that the video was entirely false. It was not really showing people stuffing ballots during an election in the United States. There’s a Snopes article debunking it, and even more persuasively, there was a BBC article that explained that all of the footage that claimed to show voter fraud in the United States actually depicted voter fraud in Russia, and it came from an investigation that the BBC had done into voting irregularities in Russia. So it had nothing to do with what the social media post claimed. Now, our digital natives, young people who grew up with digital devices and who, some people believe, are particularly good at using them to find credible information: how many of them were able to identify the Russian source of that video? A grand total of three. Less than one tenth of 1% identified the Russian source. That kind of result should give pause to anybody who is concerned about the health of our democracy. We want people to be making decisions based on credible and reliable information, and if young people are so easily misled, that should give us all pause. So the question then is: how can we help students do better? What are more effective strategies for making sense of online information?
To try to answer that question, we set out to identify expertise, and we asked three different groups of people to evaluate online sources, thinking each one of them might be skilled at this. The first were Stanford students, young people living in the heart of Silicon Valley who were online all the time. The second group were history professors, folks who evaluate information for a living. And the third group were fact checkers, folks from the nation’s leading news outlets who are tasked with ensuring the quality of information that is presented in those outlets. What we did was observe people as they evaluated unfamiliar sources. We recorded their screens and asked them to think out loud as they went through the process. And there were some really striking differences across these groups. The students and the historians tended to initially just stay on a source and read it carefully; one of the tasks focused on an article from the website minimumwage.com, and in general, the historians and the students read it very carefully and then maybe sometimes did some additional searching after a while. In contrast, the fact checkers almost immediately left the task–the website, excuse me–to figure out who’s behind this source. They found out that it was from the Employment Policies Institute, and then they did some searching about the Employment Policies Institute. And within a matter of 30 seconds or a minute, they were able to figure out that the Employment Policies Institute is actually a front group for a Washington, D.C. public relations firm working on behalf of the food and beverage industry, folks who have a vested interest in keeping minimum wages low. All of the fact checkers figured out that information. The chart shows how quickly the fact checkers figured out that minimumwage.com was run by the Employment Policies Institute: they did it in less than a minute, while it took both the historians, in orange, and the students, in turquoise, much longer. And all of the fact checkers figured out that this was a front group; only 60% of the historians figured that out, and only 40% of the students did, and it took them, on average, much longer. And what was the thing that distinguished the fact checkers more than anything else? They understood something that seems a little bit counterintuitive: on the Internet, in order to understand a source, you want to leave it. The fact checkers were very different from the students and the historians, who are smart people–this is not an effort to denigrate them–but they approached these unfamiliar sources with strategies that work well in a physical world. They read the sources carefully, read up and down on a single page, and when they did do searches, they often clicked randomly. In contrast, the fact checkers read laterally: they almost immediately left an unfamiliar source and turned to the broader Internet to figure out, what is this source? Is it what I think it is? Can I trust it? And in the process, they ignored a great deal of information. They didn’t read the source carefully, because they understood that we are living in an attention economy and that there are countless sources competing for attention. What we need to determine is whether or not a source is worthy of our attention before we give our time and attention to it.
And by turning to the broader web and figuring out what the thing is first, they were able to come to better conclusions about the credibility of particular sources. So based on this research with professional fact checkers, we have set out to create curriculum materials. We’re not trying to create an army of mini fact checkers; instead, we want to distill down some of the approaches the fact checkers used and teach them to students. Really, at the heart of what we’re doing is a disposition towards information: to ask, who’s behind this information? What is this thing? Do I know what I’m looking at? Is it a source of information that I want to use? And if I can’t figure out what it is, maybe I should just go somewhere else. What we’ve done is go out and test these materials in a whole range of settings. In our most robust evaluation to date, we gave these materials to students across an entire urban school district, and we saw very clear growth after just six hours of instruction, less time than a typical teenager spends online in a single day. There’s a growing body of research showing that this approach can work. There are now more than 14 studies with more than 10,000 participants, both by our research group and by other research groups all across the world, showing that these are strategies we can teach people, and that by doing so we can lead to better outcomes: people can become more discerning consumers of online information. Going forward with schools, we think it’s critically important to identify ways to integrate this kind of instruction into classrooms. So, if we’re in a history classroom, what are the kinds of Instagram posts or TikTok videos that we would like young people to investigate? We want students to understand how the sources where we know they’re spending their time are also implicated in core school subjects, and then to give students opportunities to practice evaluating these sources in history class, in science class, and in civics class, so that slowly but surely, students begin to build the capacity to better reason about the types of sources they are seeing all the time. We make our materials freely available on our website, cor.inquirygroup.org. The lesson plans are Google Docs, and we also have Google Forms for the assessments and a whole bank of videos as well, designed both for students and for educators. We created a whole video series with John Green and his team at Crash Course that is absolutely designed for consumption by young people. So with that, thank you so much for your time and attention. And, Diana, I’d love to hear questions from the collection that have come in.

[Diana Graber]: Well, thank you, Joel. Whenever I see your work, I’m reminded how important it is for kids to get these lessons. You guys do incredible work and have incredible lessons. But there’s a question here, and I’m going to ask the second part of it because it’s a good one: is anyone asking Congress or schools to mandate these kinds of lessons?

[Dr. Joel Breakstone]: Yeah, it’s an excellent question. All across the country, there are efforts to enact legislation calling for media literacy instruction. Now, that looks really different in different places, but we are certainly seeing that shift. An organization called Media Literacy Now has a nice tracker of the kind of legislation that has been passed, so there certainly is movement in that regard. I think the key issue that we really need to attend to as this type of legislation comes online is: what are the supports being created to help educators actually make this legislation a reality? Because in many cases, they are mandates without any support, and so teachers are being left to hold the bag. Further, we need to make sure that the kinds of resources we are providing to teachers are ready to use and evidence-based; the stakes are too high for untested resources to be put into classrooms. So there’s a twofold need. One is to make sure that we are adequately supporting educators, both with curriculum materials that are ready to use and with professional learning to assist them in using those materials. The second is an evidence base for those resources, so that we’re not going into this important realm blind about their efficacy.

[Diana Graber]: Yeah. I’m really glad you mentioned Media Literacy Now. I actually took their chart out of my presentation because there wasn’t time, but if anyone wants to go to their website, they have an excellent map of the United States showing where the legislation stands in each state. So I really encourage people to go there. Alright, so I think we’re ready to open it up now. If I could welcome back all the presenters: we’ve got a whole bunch of questions to get through, and a little time here to get that done. Gosh, I don’t know who to start with. Here’s a really good one: how do you deal with misinformation when there is a tiny kernel of truth in it? For example, there are a lot of pesticides on apples, which are part of the produce list known as the Dirty Dozen, but on the whole they’re healthy. How do you teach kids to look further? I think this might be a question for you, Rakoen, since you pointed out this resource.

[Dr. Rakoen Maertens]: I knew that apple was going to get back at me. Yeah, I was thinking about pesticides and GMOs and so on; it can go a long way. I mean, in many of the inoculation trainings we do, we as psychologists often get accused of, you know, trying to spread a certain type of truth or inoculate people against certain viewpoints. But actually much of the work we do is making people recognize certain elements that might be manipulated or might be partially untrue, and not necessarily to make them judge, “this is true or false,” but rather to treat it as a warning sign and do some extra research. Right? So I think, I mean, nothing is 100% true–well, many things are–but much of the information online is some mix of both, and that’s the complexity of reality. You cannot just reduce it to true or false. So I think any good training also has to incorporate this confidence level. Right? It’s good to have some warning signs, and if you know that apples are healthy and then someone tries to convince you otherwise, be open-minded and explore it. And then if you compare the evidence, you’ll probably see that it’s not that bad at all.

[Diana Graber]: Great advice. And, you know, we talked a little bit about deepfakes, but I don’t think we talked about them enough, because that’s a huge thing we’re dealing with today, especially kids. I was surprised when I was researching this to find out how many kids actually cannot discern whether something is a deepfake. We include that in our curriculum; we give them little checklists of things to look for. But who would like to address that here? What would you tell maybe a teenager to look for, to identify something as a deepfake?

[Dr. Joel Breakstone]: I can certainly take that on. I would urge students not to focus on scrutinizing the source itself. I think we are doing students a disservice if we give them the belief that they’re going to identify most deepfakes; the technology is improving so quickly that students could be led astray. Certainly, there are types of things that current deepfake generators are not as good at. But in general, the best approach is to say: if I’m not sure whether this thing is real, I should go somewhere else and try to get a sense of other information about it. If there is a deepfake claiming to show a politician doing something very outrageous, I should go look elsewhere. Our eyes deceive us online, and the power of these tools is incredible. We don’t want to give students a false sense of security that they’re always going to be able to identify a problematic source. So instead, better to think about: if I’m not sure about something, what are trusted sources I could look to? And more broadly, if I’m not sure, I should probably just leave it. Right? As a default, if I’m not sure about the provenance of a source, I look elsewhere. And if I can’t figure it out? Yeah, I probably don’t want to share that thing.

[Diana Graber]: Okay, this is my question, because we keep talking about trusted sources, but I’m finding that kids are starting to not trust anything. So how do we help them understand what can be trusted? And I’m going to let whoever feels comfortable take this and answer it, because it’s something that I’m grappling with myself.

[Imran Ahmed]: I mean, I think one of the problems here is that we have created an information ecosystem in which truth and lies intermingle seamlessly. So it is unsurprising that kids find it difficult to work out what is true or not, and have no real sense of how they can work it out. And that’s the reality, and it’s not just for kids, right? That state of being uncertain about how to work out what the truth is, it’s called “epistemic anxiety,” and it’s really correlated with conspiracist thinking as well. Because in the chaos of not knowing what’s true or not, being told countervailing things and not really being able to discern between them, you end up just looking for something which is solid and seemingly satisfying, a big answer, but actually isn’t. And that leads, eventually, to apathy. And apathy is really dangerous; I always say the most dangerous thing for democracy is apathy. It was a tool of control used by the Soviet Union, just as it is by people who spread lies online, so that you end up going for whatever conspiracy theory they’re trying to promulgate. I’d make the point as well that one of the issues we have is that we have been telling people to look out for things like three hands or six fingers, and that’s not a healthy thing to do. Because what about when the A.I. generator doesn’t generate six fingers? Then people think, “Well, that must be real then, because A.I. generates six fingers or three arms,” and that’s not good. You know, we know some of the examples of how generative A.I. has been rubbish, but what we don’t know is all the times when it’s been convincing; you don’t know the denominator. You don’t know whether that predictably bad generation is a tiny proportion, because you don’t know how many times it hasn’t made that mistake. And I think that’s one of the problems we’re dealing with: utter chaos. You know, you asked earlier on about truth, how a conspiracy theory may have an element of truth to it. Every conspiracy theory at its kernel has a lie, but truth can be used to mask the lie. And again, these are complex cognitive problems that adults have difficulty with. I’ve probably got the fewest degrees of anyone on this panel, and I’m actually a little bit embarrassed, I feel slightly humbled by the group I’m in, but I bet you any money every single one of us believes something that’s absolute nonsense. It is difficult to discern between truth and lies, and especially so for kids when they’re going through a process of trying to understand the world around them, building the frameworks through which they interpret the world around them. And of course, we have now turned that into gray mush.

[Diana Graber]: Yeah, I’m trying to think about what I believe that may not be true, so I’ll have to ponder that later on.

[Dr. Rakoen Maertens]: If I can add something to that…

[Imran Ahmed]: I believe no one can tell that I’m slightly losing my hair at the back, that’s what I believe.

[Dr. Rakoen Maertens]: Can I speak to that, Diana? 

[Diana Graber]: Yes, please.

[Dr. Rakoen Maertens]: Yeah. I think it’s interesting, you mentioned the degrees and so on, right? Another mistake that people with all the credentials sometimes make is that they state claims they have a lot of evidence for with a lot of confidence, but they also state claims they don’t have much evidence for with just as much confidence. And I think if you build a relationship with someone, and you are always confident in your statements and don’t express any intellectual humility, but then turn out to be terribly wrong, you can damage the trust in that relationship. Right? There’s also some interesting research on how we communicated during COVID, for example: channels that said, “the vaccine might have some problems, but it will be fine,” versus channels that said, “No, it’s impossible to die from the vaccine.” Right? And then there were blood clots. So it’s interesting to take into account: even if you have expertise, how do you communicate it in a way that carries long-term trust and not just short-term trust? That’s more from a communication standpoint, and it doesn’t necessarily help children choose who to trust, but yeah.

[Diana Graber]: Yeah, and I think you bring up a good point. Just real quick, and then I’ll go to you, Joel. I think we learn so much from just listening to the kids; they’re pretty smart about this stuff. And when you’re talking about A.I., that can get so wonky, but it’s really simple to teach a child how A.I. works. And if they start understanding the mechanics of it, they’re going to be a little more thoughtful, I think, about results from A.I. or when they see something that’s A.I. generated. So I don’t know. Joel, go ahead. I interrupted you.

[Dr. Joel Breakstone]: No, not at all. I apologize for interrupting you. Circling back to your point about trusted sources: I think a key part of all of this is making sure that students see instances where they can find quality information. There’s a danger that instruction in this realm leads students to believe that nothing can be trusted, to Imran’s point, and that is a terrible outcome. In the end, what we want is for students to understand the incredible power that is at their fingertips, right? They have the internet, this incredible resource of information, but how do we use it well? So we should both make sure that we provide opportunities for students to see the powerful ways in which they can become better informed using online sources, and also not fall into this dichotomy of sources being real or fake, because so much of this stuff is in the mucky middle, right? What I’m really trying to think about is: what is this source? What is the perspective of the source, and how might that perspective influence the content of the message they’re delivering? So we often talk about having students think about perspective and authority. What is the perspective of a source, and how might that influence what they’re saying? And then, what is the authority of the source to speak about a particular topic? Are they well suited to give me information that I can trust about this topic? And those two things are not dichotomous; they exist on a scale, and we should think about them and reason about them in each instance.

[Diana Graber]: I loved that point about perspective, that’s so important. And yes, Andrew, I was going to go to you next, so jump in. I also wanted you to talk a little bit about developmentally appropriate ways to address all this with students, but it sounds like you had something else you wanted to put out there first.

[Dr. Andrew Shtulman]: Yeah, just on the topic of the pessimism that’s generated by the epistemic chaos that’s out there. I think, as a news consumer, it’s natural to feel pessimistic. But there’s also an educational opportunity here for teachers at all levels, K through 12 and even college, to get people to think more deeply about knowledge: how knowledge is generated, how it’s evaluated. There are these classic studies in cognitive and developmental psychology looking at people’s understanding of knowledge, and the finding is that it takes graduate-level education before you really understand what knowledge is, from, you know, the most sophisticated philosophical perspectives. Most people have just been absorbing things that have been given to them, which was okay in a time when there were three news channels and everything you saw on those channels was well vetted. But now that we live in this chaos, that work has shifted from the people producing the news onto the news consumer. So I guess we have to train people to think more deeply about knowledge at a younger age. But we don’t have to be totally pessimistic about it, either; it could be good to be thinking more deeply about what makes something a verified fact.

[Diana Graber]: So can you talk a little bit about how you would introduce this at different developmental stages? I mean, obviously this is really different for a six-year-old than it is for a 17-year-old.

[Dr. Andrew Shtulman]: Sure. I mean, just thinking in terms of the literature on epistemic understanding, there are some kinds of epistemic distinctions that are pretty easy to introduce even to young children, like the difference between a fact and an opinion, or the difference between a statement of fact and a statement of values. Those are things that even elementary school children can grapple with, independent of, say, the internet. I think there’s probably a whole suite of instructional tips and techniques that need to be introduced when dealing with specific channels of media, the internet, or social media platforms, or whatever. But there are also potentially lessons in the production and evaluation of knowledge that could be implemented. I mean, typically that’s not even addressed at all in K-through-12 education.

[Diana Graber]: Yeah, okay. And I think this next question is for you, Joel: how do you teach youth to learn from the news or fact-check when so much is behind a paywall? Teaching them to search could lead to dead ends, and the only results that will open are often from far-right think tanks.

[Dr. Joel Breakstone]: Yeah, it’s a really important point. I also think this is a key part of having students reason about A.I. chatbots: to help students understand what information A.I. models have been trained on, and that often things behind paywalls have not been included in those training sets. But yeah, certainly part of what we want students to do is to think about which sources may be more likely to yield accurate information. It comes back to Andrew’s point about how knowledge is produced, right? To think about what systems are in place for the creation of that information. And sometimes, you know, you are going to run into paywalls. That’s a real problem, and it’s a question about how journalism is funded in the United States and more broadly. But I also think we need to take a step back from the notion that students have to come to an exact understanding of what is going on. We don’t need students to be fact checkers who chase down every last detail. Instead, we want students to be able to figure out quickly: what is this thing? Is it something that I can trust? In some cases, you don’t need to read the entire paywalled article; the quick little blurb at the top will give you enough information, or you can find a well-sourced Wikipedia entry that gives you enough information to understand the source. So part of this is that we need to be reflective of the unbelievably overwhelming amount of information coming at all of us. It’s unrealistic to expect that students are going to spend a lot of time investigating any of these things, so what are the quickest ways we can get a better sense of the thing that’s in front of us? That being said, that’s not a total answer to the problem of information being behind paywalls and, more broadly, how we ensure continued access for all of us to better content. But in terms of how to think about it, part of it is to say: if you run into a paywall, what are other sources we could look to, so we can figure out something quickly and not always have to rely on the incredibly long article? Even just the snippets on a search results page can often give us some clues about what this thing is that we’re looking at.

[Diana Graber]: You know, as you’re explaining this, I’m thinking how much kids actually love doing this, especially when they do it in groups, because it’s almost investigative. They’re trying to solve a problem together, and they feel a sense of, you know, “Hey, I did it. I found the right thing.” So it’s a real fun thing to teach as well as being so important. This next question is for you, Dr. Maertens. You mentioned actively open-minded thinking is a skill, but I’m wondering if it’s more of a personality trait, much like the Big Five personality trait of openness?

[Dr. Rakoen Maertens]: Yeah, that’s a good question. The jury is still undecided, but from what I’ve seen so far, it’s a combination of three things: it’s a personality trait, it’s a skill, but it’s also, to a large extent, a social norm in groups. Some groups are much more open to new perspectives, right? And others really force you to get streamlined into one viewpoint and don’t allow any deviation. So even if you’re high in open-minded thinking, if you’re in such a group, it will be hard to express yourself. We actually did some research to compare, for example, the openness factor on the Big Five personality questionnaire, and it doesn’t have a high correlation with actively open-minded thinking, so it’s actually slightly different. The Big Five factor is more about being open to new experiences, to science and literature and so on, while actively open-minded thinking is more a way of processing information, and intellectual humility is part of it: updating your opinion, and not just saying that you’re listening to another perspective but really considering it. But yeah, the jury is still out on how much of a role the personality, social norm, and skill parts each play, and which one is easiest to teach or train or change in society. To be continued.

[Diana Graber]: Great, thank you. And, Imran, I’ve got one for you, because you sort of alluded to this a little bit in your presentation. Someone has asked: some people may recommend keeping kids off digital media, maybe forever. What’s your take on that? They write, “I believe kids are being avoided, not raised and left to the fake media.” I don’t know what that part means, but what’s your thought about keeping kids off digital media?

[Imran Ahmed]: I mean, personally, I’m not an expert on this, and I only speak as someone who studies harms, which gives me a really cynical lens on the world. But I also know how powerful these technologies are. They can be a force multiplier for the learning process; they can be incredible in engaging a child’s imagination. I remember, I’m 46 years old, when I was 4 or 5 years old my dad brought home a BBC Micro Model B computer with 32K of RAM. For anyone who knows these things, that’s pretty cool, but very old and very slow. And I learned to program in BASIC, and that just sparked imagination like you wouldn’t believe. I remember sitting there for hours trying to understand logic and programming, and that’s a phenomenal gift to a child. So I don’t want kids to be taken away from those things. My job, as someone who runs an advocacy organization, is to make sure that the platforms are safer, so that all of our kids can benefit from the enormous benefits of social media, this enormous library of information out there in the world, and the power of artificial intelligence to be syncretic, bringing together information and synthesizing it into new forms and new ways of answering the questions our kids have. What I want is safer A.I. and better social media, so that we can use them in the right way. But right now, we don’t have that. And God willing, in a few years we will; there are jurisdictions around the world, like the EU, the UK, Canada, and others, which are working really hard to establish de minimis standards for these platforms and to ensure they’re more transparent, accountable, and responsible in the way they’re designed. So, you know, it’s a watching brief; let’s see what changes over the next few years. I think there is increased awareness and will to act from lawmakers around the world.

[Diana Graber]: And what I would say to that question is: good luck to those parents trying to keep their kids off. Kids need these tools; they need all of this for their futures. We need to educate them on how to be more discerning and how to use them well and safely. So, any other last words on that? All right. Well, I think we’ve done it. Kris, if you want to come in, I think we’ve wrapped it up for you. Thank you so much to all the wonderful presenters. I learned a lot today, and I’m for sure going to watch the recording to get a lot of great resources. So, thank you.

[Kris Perry]: Thank you, Diana, and the entire panel for this thoughtful discussion on how children perceive and manage the information they receive online, and how adults can help them think critically and navigate digital challenges. If you found today’s webinar insightful, help us keep the conversation going. Your donation ensures we can continue bringing you expert advice through our Ask the Experts webinars. It’s easy to give: just scan the QR code on your screen, click the link in the chat, or visit our website at ChildrenandScreens.org. Together, we can make a difference.