Social media platforms are feeding youth harmful content—fast. From hate speech to pro-eating disorder posts and disinformation, powerful algorithms push dangerous content straight to children. But why? And what can be done to stop it? On this episode of Screen Deep, host Kris Perry talks with Imran Ahmed, founder and CEO of the Center for Countering Digital Hate, about the hidden dangers of social media algorithms, the risks youth face on various popular platforms, and the urgent need for transparency and accountability in digital spaces.
About Imran Ahmed
Imran Ahmed is the founder and CEO of the Center for Countering Digital Hate US/UK. He is an authority on social and psychological malignancies on social media, such as identity-based hate, extremism, disinformation, and conspiracy theories. He regularly appears in the media and in documentaries as an expert in how bad actors use digital spaces to harm others and benefit themselves, and in how and why bad platforms allow them to do so. He advises politicians around the world on policy and legislation. Imran was inspired to start the Center after seeing the rise of antisemitism on the political left in the United Kingdom and the murder of his colleague, Jo Cox MP, by a white supremacist who had been radicalized in part online during the 2016 EU Referendum. He holds an MA in Social and Political Sciences from the University of Cambridge. Imran lives in Washington, DC.
In this episode, you’ll learn:
- How quickly social media algorithms deliver harmful content to children, such as pro-eating disorder and drug content.
- How platforms are aware of this issue, yet choose not to act on it.
- Why transparency in social media algorithms is essential for protecting children from harmful content.
- What kind of advocacy is needed to drive stronger protections—and how recent advancements in European policies may provide a model.
- How the STAR framework for social media reform promotes Safety by design, Transparency, Accountability, and economic Responsibility.
Studies mentioned in this episode, in order mentioned:
Center for Countering Digital Hate (2024). Deadly by Design: TikTok pushes harmful content promoting eating disorders and self-harm into young users’ feeds. https://counterhate.com/research/deadly-by-design/
Center for Countering Digital Hate (2024). YouTube’s Anorexia Algorithm: How YouTube recommends eating disorders videos to young girls. https://counterhate.com/research/youtube-anorexia-algorithm/
Center for Countering Digital Hate (2024). TikTok’s Toxic Trade: How TikTok promotes dangerous and potentially illegal steroid-like drugs to teens. https://counterhate.com/research/tiktoks-toxic-trade/
Center for Countering Digital Hate (2024). Rated not helpful: How X’s Community Notes system falls short on misleading election claims. https://counterhate.com/research/rated-not-helpful-x-community-notes/
Center for Countering Digital Hate (2024). Building a Safe and Accountable Internet: CCDH’s Refreshed STAR Framework. https://counterhate.com/research/star-framework-to-build-a-safe-and-accountable-internet/
Center for Countering Digital Hate (2024). AI and Eating Disorders: How generative AI enables and promotes harmful eating disorder content. https://counterhate.com/research/ai-tools-and-eating-disorders/
Center for Countering Digital Hate (2024). Fake Image Factories: How AI image generators threaten election integrity and democracy. https://counterhate.com/research/fake-image-factories/
[Kris Perry]: Hello and welcome to the Screen Deep podcast where we go on deep dives with experts in the field to decode young brains and behavior in a digital world. I’m Kris Perry, Executive Director of Children and Screens and the host of Screen Deep. Today we’re going behind the curtain of social media platforms, digging into important online safety topics, including AI, how algorithms deliver content to children, and why we might want to pay close attention to what that content is.
I’ll be talking with Imran Ahmed, the founder and CEO of the Center for Countering Digital Hate. CCDH is a nonprofit dedicated to containing the spread of online hate and disinformation through research, public campaigns, and policy advocacy. Welcome to Screen Deep, Imran. I’m looking forward to this timely and particularly crucial conversation.
Most listeners have likely heard about social media algorithms and AI, but may not know on a practical level how these technologies underpin how content is delivered, which causes a wide range of impacts on children and adolescents. For example, your organization has published a few reports that describe how quickly some social media platforms are using algorithms to target undesirable content to children. Can you tell us about these reports and what people need to know about today’s algorithms and what they are delivering to children and adolescents?
[Imran Ahmed]: Well, look, thank you, first of all, for having me, and I’m really happy to go into this in a bit of detail for your listeners, because it is both really complex, in that there’s some really complex maths involved, and fundamentally very simple. There are three A’s that you need to know about.
So A number one is “advertising” and social media platforms make all of their money from advertising. And advertising is—the way that they make money is by keeping us on the platform for as long as possible. So, you know, the advertising revenues are the number of people multiplied by the length of time that we spend there, because every few posts they can send us an ad and they make a little bit of money.
The second thing is that social media platforms, they have so much content, it isn’t possible for anyone to watch all of it.
Let me give you one example: TikTok. Every single day, 13 years’ worth of content is uploaded to that platform. Every single day. And so it isn’t possible at all for anyone to watch all of it.
And so you need the third A, which is “algorithms.” And algorithms decide what content is going to be shown to lots of people and what content they’re just gonna, you know, shove aside. And that’s because fundamentally, they need to—to make maximum advertising revenues, their algorithms need to addict people for as long as possible. And so what they’ve worked out is, there is some content that leads people to stay on there for longer. That means that, for example, you know, if you’re trying to find facts about, say, healthcare, there is no interest for that platform in having you find those facts immediately. In fact, their economic interest is in having you find it difficult to find that fact. So what they wanna do is present you with two different versions or 10 different versions of the truth and let you work it out for yourself.
If you’re a kid, and those of us who have kids know this to be true, they present them with content that can be quite harmful, but actually makes them want to spend time on that platform. And keep in mind that these algorithms are not human beings. There’s no human being that is as malignant or as devoid of emotion, of compassion, as a mathematical equation can be. What they’ve discovered is that making kids feel bad about their bodies, making them be envious of other people’s lives, bodies, appearance, and everything else is actually a really good way to keep them addicted to that platform. And that’s precisely how the mathematics make these platforms work in practice.
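To make the mechanics described above concrete, here is a deliberately simplified, hypothetical sketch (not any platform's actual code) of an engagement-maximizing recommender: revenue scales roughly with users multiplied by time spent, so the ranking objective is simply whatever is predicted to keep each user watching longest, with no term for whether the content is helpful or harmful.

```python
# Hypothetical illustration only: real recommendation systems are vastly more
# complex and proprietary. The point is that ranking purely by predicted
# watch time is indifferent to whether content is harmful.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_watch_seconds: float  # model's guess at how long this user will keep watching

def rank_feed(candidates: list[Post]) -> list[Post]:
    # Objective: maximize time on platform; nothing in the score penalizes harm.
    return sorted(candidates, key=lambda p: p.predicted_watch_seconds, reverse=True)

def estimated_ad_revenue(users: int, avg_minutes_per_user: float,
                         ads_per_minute: float, revenue_per_ad: float) -> float:
    # "Advertising revenues are the number of people multiplied by the length
    # of time that we spend there."
    return users * avg_minutes_per_user * ads_per_minute * revenue_per_ad
```

Seen through that objective, a post that keeps a vulnerable 13-year-old watching longer will outrank one that answers her question and lets her put the phone down, which is the dynamic Imran describes.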
[Kris Perry]: I’m glad you brought this up, because I think most people can agree that children shouldn’t receive developmentally inappropriate content. Are the platforms doing anything to protect children from not only being exposed to, but being targeted with, this type of content?
[Imran Ahmed]: Well, the platforms know how their algorithms work in practice, and so do we. And they recognize that, of course, the inhuman, artificial, robotic intelligence that underpins these platforms has worked out that content that makes kids feel bad about themselves is actually really addictive to them. So they’ve recognized this, they acknowledge it, and they have rules in place. Google, so YouTube—which is owned by Google—TikTok, Instagram, Pinterest, all these platforms say that they have rules. What they say, in fact, is that they remove that kind of content, so there’s no risk of even the algorithms that run these platforms accidentally, in their view, showing that content to our kids.
And one of our jobs at CCDH is testing whether or not the claims that platforms make are actually true. Because if they’re claiming to parents that, “We know there’s a problem, but we are doing things about that problem,” and in reality they’re not, that would be an unconscionable betrayal of the trust that parents put into these platforms, when they allow their kids to use them.
I’m going to give you two examples from recent research we’ve done. Last year we did a study on TikTok, and we set up accounts as 13-year-old girls in four different countries: the United States, where I live and where I have my children, the United Kingdom, Australia, and Canada. What we found on that platform was that within 2.6 minutes of opening an account, on the “For You” feed, which is the main page of TikTok, you were receiving self-harm content. Within eight minutes, you were receiving eating disorder content—every 39 seconds on average. The content that was being served up had hashtags linking it all together to do with eating disorders and self-harm, and the videos with those hashtags had received 13 billion views. So a really clear failure by those platforms to do something about content that can really harm kids.
Another example is YouTube. Now, YouTube has the most advanced policies, the most consumer-and-kid-friendly policies. They say that they remove harmful eating disorder content. They say that they never recommend anything to do with eating disorders to kids. We did a study—a very recent one, from the last few weeks—where we set up accounts as a 13-year-old girl. We then watched one video about disordered eating and had a look at what YouTube recommended “Up next,” the algorithmic feed that tells you what to watch after you’ve watched that one video. We found that 700 times out of a thousand of those recommendations—and these were completely new accounts, where YouTube knew nothing about the person watching apart from the fact that it was a 13-year-old girl who’d watched one video about disordered eating—700 times out of a thousand, the platform itself recommended harmful eating disorder content, general eating disorder content, or self-harm content. So their algorithms even recognize that a kid is vulnerable and say, “Well, kids that hate the way their bodies look sometimes want to harm themselves too, and that’s content they can’t find anywhere else, and if we feed it to them, they will look at more and more and more content that we can serve them.”
And that’s the dispiriting reality for parents of how these platforms work: first, the platforms are serving up this content algorithmically, and we’ve proven that; second, they have rules because they recognize that this is a problem; but third, they’re not following through on their purported rules with real-world action that protects our kids from that kind of malignant content.
[Kris Perry]: I mean, it’s more than dispiriting, it’s disturbing. What should the platforms be doing on their own? What action should they take to rectify these situations? And, if they aren’t taking enough steps, what do we need to do to compel them to do better than this?
[Imran Ahmed]: Well Kris, you know, I believe that no human being could be so callous as to go up to a child, tell them that they’re ugly, and too fat, and that they should cut themselves or go on a starvation diet. I think if any human being did that, they would be subject to significant social and potentially legal sanctions, right? If a teacher did that, if someone on the street did it, you would never let your child near that person again. And yet we let our kids spend hundreds of minutes a day on these platforms.
So, why is that true? Well, economically it is in their interests, and we don’t have rules as a society; we haven’t done the work to place guidelines into law. And then there are the advertisers: on those videos that I was talking about in that YouTube study, Nike’s ads were appearing on that sort of harmful eating disorder content. Advertisers are paying, essentially, to appear alongside that content.
And I think the fundamental problem that we have is the lack of transparency and accountability. The way that platforms work—CCDH has got data scientists, experts in how algorithms work. We can work out what is happening behind the scenes by looking at the outputs of those algorithms. And we do that every day. That’s what my team is tasked with doing. But we don’t actually know what’s in those algorithms. And so we’ve argued that we need to have real transparency of the algorithms and the ability for experts to interrogate the outputs of those algorithms to understand how they work in practice—not just the machine code, but being able to look at how they’ve worked for different people.
We’ve asked for transparency of their rules and also of how they enforce their rules. So, another example taken from our own research: we reported a hundred of those thousand recommendations to YouTube and gave them two weeks. We pretended we were just normal users, reported the bad content, and asked, “Well, what will they do?” Eighty times out of 100 they took no action, so they didn’t enforce their own rules. So, we want to have more transparency of how they apply their rules on those platforms, and we want more transparency on where adverts appear on those platforms. That’s not for us, that’s for the advertisers, because I can bet my bottom dollar that if a marketing executive at Nike knew that their adverts were being placed on content that’s telling kids to go on a 500-calorie-a-day diet, which will eventually kill them, they would absolutely, you know, blow a gasket. And so we want to have transparency. And if you’ve got transparency, you can then have meaningful accountability.
And so what we want is this: sometimes, when people like us talk about social media platforms, we get told that we’re censors because we want to stop people from speaking. We don’t. We want more informed discourse. We want to have transparency so we can have a real conversation with these people, and then, if they do wrong, to hold them accountable.
I’ve had the honor of becoming close to parents over the last few years who’ve lost their children to this kind of content. And I think that, when someone harms another human being, they should be asked to pay restitution, that there should be some mechanism for justice for those—for people who’ve been harmed. And that’s true of anything in the real world. So, if your Corn Flakes had poison in them, you’d be able to sue Kellogg’s, right? If someone did something bad to you, you can hold them accountable. There’s the criminal justice system, there’s the civil justice system.
But uniquely, social media platforms can’t be held accountable, because of a very old law in the US called Section 230 of the Communications Decency Act of 1996. And one of the things that we’re campaigning for is reform of that, so that when companies know there’s a problem, and when they don’t do anything about it, and when that really harms people, especially our kids, we should be able to hold them accountable. They should have to pay a price for it, and that way you create a disincentive for them being lazy and kind of not very good at doing the things that they know they should be doing.
[Kris Perry]: Your example of the TikTok eating disorder content being delivered to children is really quite shocking. Beyond eating disorder content, I also saw that CCDH has a report on how drug content is being delivered to youth on TikTok. Were these findings similar to the eating disorder report?
[Imran Ahmed]: Yeah, what we looked at there was how steroid-like drugs, anabolic steroids, peptides, these kinds of things are being recommended to young men on TikTok. And I think what was particularly disturbing about that was some of the routes into that sort of content: these are people who are telling young men that they’re not good enough, that in order to be a real man you have to have a physique like Captain America, and that if you don’t have that kind of physique, you’ll never have a relationship in your life. Forget the fact that the truth is that just being a nice guy, and maybe financially stable, is really what you should aim for first, rather than being insanely muscled and using steroids to achieve it. And they’re not telling these young men about any of the potential harms that can come with it. The truth is that taking anabolic steroids can damage the heart tissue so badly that it can lead to a significantly reduced lifespan.
And so, you know, that content was really disturbing. And look, I do think that we have a problem of the messages being sent to young men which make them feel terrible about themselves, give them terrible life lessons about how to make meaningful relationships, and tell them that they should hurt themselves, and put themselves at risk in order to achieve otherwise unachievable goals.
[Kris Perry]: We had Jason Nagata on a previous episode, who spoke at great length about male and female body image, body dysmorphia, and eating disorders. And it was really quite alarming to hear about how AI and algorithms are capitalizing on that vulnerable stage of development, early adolescence, when children are particularly focused on upward social comparison; for the companies and advertisers to take advantage of that vulnerable stage is really quite upsetting. Are there any settings on platforms such as YouTube, TikTok, and Instagram that parents can use to protect their children from this harmful algorithmic content?
[Imran Ahmed]: There aren’t any settings you can trust, because one of the things that we do is measure the difference between the claims that platforms make and the reality. And consistently, we find that the claims they make are essentially spin. So the only setting that keeps kids safe is not to give them access to those platforms.
The truth is that algorithms aren’t evil or good. They don’t have emotions, they don’t have compassion, they aren’t human beings, but they are programmed by human beings. And even then, these mathematical algorithms can be so complex that the people who produce them can legitimately and credibly say, “We don’t quite understand what the impact is in the real world.” But once someone like CCDH has done the research, you expect them to take action.
Now let me give you an example from TikTok. Their reaction to that research was to make it more difficult for people to do that kind of research again. I’ll give you another example: platform X, when we did a study of hate speech on that platform, sued us—not for being wrong, but for the act of doing research. I’ll give you another example from Google and YouTube. They reacted to our report by saying it was nonsense. Now, I then got on a phone call with one of their executives, a real human being. And he said, “Oh, I’m so sorry that we did that. I don’t know why. It wasn’t me that signed off on it.” And I thought, “Well, someone did.”
And I’m afraid that the ways that these platforms have reacted to people highlighting problems in the past is indicative of the fact that they know there’s a problem, and they’re unwilling to do anything about it. And when you get to the point of knowing indifference, I think that should be where liability kicks in. So until we have a system in which they can be held liable when they harm your kids and there are real disincentives for them behaving this way, I do not think these platforms are safe for children, full stop.
[Kris Perry]: I appreciate the clarity of that response and it is a little hard to believe that some of the wealthiest companies on the planet don’t know what’s going on or how they can improve their safety.
Recently, Meta announced that it was changing its content moderation policies and removing fact-checkers in favor of Community Notes. What do you think the effect of this policy will be on harmful content or misinformation online?
[Imran Ahmed]: So, Community Notes is a system where people can add a comment to a post saying, “Here’s a fact check on it, and here’s the truth.” It was actually X, under Elon Musk, that introduced the Community Notes feature. It’s a good thing, right? We should try everything we can. But we wanted to do a study to look at how effective Community Notes are.
So, two problems we found—well, three problems we found with it. First of all, when you’ve got a Community Note on a piece of content, the actual content itself, before the note can be put up, will have achieved 13 times as many views as the note. So that means that lots and lots of people are given the disinformation without a note, and comparatively few people ever get the note. That’s problem number one.
Problem number two is that, in our study, 74% of the Community Notes never got shown. Someone actually went to the effort of writing a note, but in order to be shown it has to achieve a certain level of consensus, and that was never achieved, so the note was never shown.
And the third problem is the time it takes between the content going up and a note actually appearing; quite often, achieving that consensus can take a really, really long time.
So in three ways it is not fully effective. Now, that doesn’t mean it isn’t partly effective—it’s good when it’s partly effective. But it should be part of a portfolio of responses. And the idea that you can get rid of everything else and just have Community Notes, that’s really wrong-headed. It’s like saying, “I can eat a Big Mac for every meal as long as I have a bit of lettuce once a day.” And you’re like, “That’s not how it works, guys. You actually need to think about this holistically.” And so, you know, it is really dispiriting, really depressing to see someone as smart as Mark Zuckerberg treating us like we’re so stupid.
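As a rough back-of-the-envelope illustration of the reach gap described above (an editorial sketch using only the ratio Imran cites, with a hypothetical view count), if a misleading post achieves 13 times as many views as its eventual note, only about 8 percent of the people who saw the claim ever see the correction.

```python
# Back-of-the-envelope sketch using only the 13x ratio cited above; the view count is hypothetical.
def share_who_see_note(content_views: int, views_ratio: float = 13.0) -> float:
    """If a misleading post racks up `views_ratio` times as many views as its
    eventual Community Note, this is the fraction of viewers who ever see the note."""
    note_views = content_views / views_ratio
    return note_views / content_views

print(f"{share_who_see_note(1_000_000):.1%} of viewers see the correction")  # ~7.7%
```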
[Kris Perry]: I wonder if it’s that sentiment that propelled you to start this organization, CCDH, and get into this work. There’s so many angles we could take on this, but I would love to hear what brought you to this point in this work that you’re doing.
[Imran Ahmed]: It’s not a nice story. I’m 46, so I’m that generation that grew up without, you know, a lot of like—there was no Cold War when I was a kid, and nothing bad really happened in the world around me until 9/11.
And 9/11 was the day when I quit everything I was—I was actually a banker. I was 22 years old, it was the day before my 23rd birthday, and the next day I went to an Army recruitment center. And I said I wanted to join the Army and then they told me, “You should go and do something else, mate.” And this is in England. And I said, “Right, fine, I’m gonna learn politics and I’m gonna become a politician, so I can stop these kinds of things happening again.” I went to Cambridge, I studied politics and I worked in Parliament for 10 years.
And then, one day in 2016, my colleague, Jo Cox, who was a 41-year-old mother of two, was shot, stabbed, and beaten to death on the streets of her constituency in England by a far-right terrorist who’d been radicalised online. And I thought, “Someone has to do something about this.” Because there was just a growing phenomenon of conspiracism and hate in our society, and we could tell that it was being driven in online spaces. And at the time there was a rise in antisemitism on the politically progressive side in the UK. I thought, “Look, this is all happening simultaneously. Something bad’s happening.”
And so I went to the platforms. I started studying how the platforms worked, how the algorithms worked, why their enforcement was failing. I spent three years talking to them, until I realized that they already knew, and they didn’t care.
And so I launched CCDH as a sort of cry of frustration in 2019, to try and bring more awareness to the problem and to try and get lawmakers, parents, you know, everyone in society to stand up and do something about it. And five years later, I can say that I’m really proud that, you know, I was the first witness to give evidence to the UK Parliament for its new Online Safety Act. I’ve given evidence in Brussels for the new Digital Services Act. I’ve given evidence in Congress, here in the United States where I live, to push for legislation in the US. We’ve backed things like the Kids Online Safety Act, which your listeners may have heard of; it wasn’t passed, but, you know, we have helped to bring awareness to this issue and drive regulatory action, and we’ve also driven advertiser action. And so we work really closely with everyone, to try and get every organ of society to pitch in and help us persuade social media companies to actually do the right thing, something they know is right but fail to do because it’s not in their economic interests.
[Kris Perry]: Thank you for sharing that personal story. That was extremely moving and it also reminded me of your earlier comment about the work you’re doing with parents whose children have been negatively impacted by the platforms because this has grown in magnitude in these five years since you’ve taken on this role. Thank you very much for the work you’re doing.
I know your organization has developed what it calls the STAR framework, that was used to help develop policy in the UK for internet safety. Can you tell us about this and how it might be used to start improving online safety for youth in the United States?
[Imran Ahmed]: So the STAR framework is our way of ensuring that freedom of speech is respected but, at the same time, that we have a richer and more informed dialogue with social media companies about how they work, and that when they cause harm we can hold them responsible. It says that Safety by design is created by Transparency, Accountability, and economic Responsibility. And, you know, sometimes I get told that I’m pro-censorship. But this is about transparency, which means more speech; accountability, which means informed speech; and responsibility, which is based on negligence law, a fundamental aspect of how our civil justice systems work.
And we think that by doing that, you can create a richer dialogue and create real disincentives for negligence that harms people. You would do it for a deli; every business in America is subject to negligence law. I run a charity, a 501(c)(3), and I have to declare a ton of stuff to the government to maintain my charitable status. So we have corporate disclosure law, and we have negligence law. This is about making sure that they are fully applied to social media companies, which, to date, have not had those laws applied to them. And in the US, in order for that to happen, we’re going to need to have Section 230 of the Communications Decency Act of 1996 either repealed or reformed.
[Kris Perry]: For the UK’s Online Safety Act, what was included in that legislation that your organization is most excited about, and is there anything you wish had been included but wasn’t?
[Imran Ahmed]: Well, I mean, I’m really excited about the transparency stuff, first of all, because both the UK and European Union—so, through the UK’s Online Safety Act and through the European Union’s Digital Services Act—force platforms to be more transparent. And what I’m really excited about is that that’s going to help American consumers, too, because it’s going to make sure that we can properly interrogate how their algorithms work, how their enforcement works, and use that to inform the debate in the US.
The other thing is I think both of those bits of legislation have particularly strong rules around kids and the harm that can be done to kids, recognizing that, you know, with an adult you can make the argument, “Well, they can try and find out the truth some other way.” I think that that is a—it’s ridiculous to say to us that, “From now on, we’re never gonna tell you the truth. We’re gonna tell you five lies and one truth, and it’s your job to work out which the truth is for everything.” ‘Cause that’s just nuts. That’s a society I don’t wanna live in. I don’t have the time for it. Like, just, can I just know what the truth is? You know, I’ve got a busy life, I’ve got kids to raise, I’ve got a job to do, I’ve got other things, I wanna…You know, there’s a lot of content on Netflix I need to get to eventually. So like, you know, please don’t force me to try and work out the truth all day.
But, for kids, those rules are really, really strong, in that if you harm kids, they can hold you liable. And, you know, the fines that can be imposed under those two bits of legislation can be up to 6% of global revenues. Now, that’s a lot of money. So, I think there’s real teeth to this legislation as well. And I’m really excited about the fact that in the next year we should see the first rulings under those new acts from the new regulators, and that means that we’re going to see changes happening.
And let me tell you a secret: when Mark Zuckerberg announced the changes that he did the other day, he said it was only in the US that he was making those changes. He said he wasn’t going to do it in the UK and EU, because they have laws. And I think that’s really important. Remember that. UK and EU consumers are going to be treated with more respect. Their kids are going to be protected. Not American consumers. Why? Because their lawmakers got off their butts and did something about it.
[Kris Perry]: There was public demand that lawmakers take a stand and push back on behalf of children in the EU. And it is really quite inspirational to think that that can happen and that the companies have figured out how to comply, but only choose to comply where they’re required to. So we know that those options are right there. They can flip a switch and these same protections could be offered to children in the US, but they haven’t been yet. Are the United States and Canada moving on any other transparency or liability issues that may have been inspired by the UK or addressed by the UK legislation?
[Imran Ahmed]: You know, our polling shows us that 80% of American people also recognize that there’s a problem with social media platforms, but they have very low faith that there is a solution to it. And I think that’s because we have a broken legislative system, and we have some really wrong priorities for our lawmakers. They seem to be too busy arguing over the dumbest things rather than actually having the backs of us as parents.
But there are signs of activity below the federal level, at the state level, and you see a lot of innovation, a lot of exciting things happening in California, in New York, in Florida. You know, like—this is not about ideology. It doesn’t matter if you’re left or right, if you’re MAGA or you’re AOC. We all love our kids. We all know that our most profound duty—I remember when my first child was born, standing in the delivery room, and whispering in my child’s ear, “I will do anything to protect you.” But that’s not just a promise that we make to our own children, we make it collectively as a society as well. And I do think that this is something that’s beyond politics. So it is encouraging to me to see states taking action.
And I think that there is interest in federal action, as well. You know, the Kids Online Safety Act had more signatures in the Senate, more sponsors than any other bill in the last session. It just didn’t get through to a floor vote. I’m angry about that. I think we should all be angry about it. So, one of the things that we’ll be doing over the next few years is working with lots of groups to align their voices.
And I’m telling you, it’s not just about kids. It’s also my Jewish friends who are worried about the anti-Semitism that’s being spread on that platform that puts their lives at risk. It’s my Black friends who are worried about the spread of the lies that underpin hatred against Black people, who know that it puts them at risk as well. And I think there are so many different parts of society—this morning I was speaking on the Hill with the American Public Health Association about public health being undermined. We speak to a wide array of people, and I think that we are the many, and we have to make sure that our voices are heard. But that means aligning around solutions as well. And one of the things that we can do is align around transparency and accountability and responsibility, the most “apple pie” of American jurisprudential concepts.
[Kris Perry]: You’ve brought up many examples of the difference between protections in real life versus our digital life: the ways in which we would never tolerate certain risks, certain behaviors in real life that we’re tolerating online all day, every day. Beyond supporting policy change, is there anything you think individual parents and families can do to advocate for a safer internet for their children? And should we be drawing from examples in real life, like what we did to make cars safer, or what we’ve done to improve children’s health through vaccination, all these big innovations that have occurred in real life to protect children? What can parents do about this online situation, where their children simply are not safe?
[Imran Ahmed]: Look, I think there are three things that everyone can do. The first thing is to make sure that you’re having those conversations—you know, we’ve got to protect our own families first, right? So, first: we have a free online guide for how parents can have good conversations with their kids about social media. It’s at protectingkidsonline.org, and it’s free to download there. It’s written by me in consultation with real experts, but also in collaboration with a guy called Ian Russell, who lost his 14-year-old daughter Molly to self-harm content online. He’s on the board of my organization, and he’s part of the reason I get up every morning and do this work.
The second thing that we can do is directly speak to our lawmakers and ask them the question: “What are you doing?” So, a simple email or phone call to their offices asking them, “What have you done to make sure these spaces are safer for my kids? Not what have you said. Not, you know, what PR have you done about it? What have you voted on? What have you introduced? What have you actually done? I need you to do something.” You know, my frustration having worked in politics is that there’s a lot of jaw-jaw and not enough actual action, but we need action now, because they know what the problem is and they can’t hide behind ignorance.
The third thing we can do is support organizations like yours and mine that are doing this work. And I think that there is a growing ecosystem of organizations who are trying to deal with this very modern problem. You know, CCDH is only five years old. I mean, crumbs, I’ve been doing this work for eight years, but that’s because it’s a relatively new problem in our society. And when we find something new, people will spring up and say, “Hey, I’ll take the responsibility of trying to do something about it.” And I think that there are so many good people doing so much good work, and everyone can do something to help support them.
[Kris Perry]: Thank you for acknowledging all of the work your team, and our team, are doing. There are lots of people very active in this space.
I want to pivot from algorithms and advocacy to AI. I think you and I and our listeners can agree that the explosion in very accessible AI tools and content in the past year is mind boggling. Let’s drill into different pieces of the AI space, starting with deepfake AI content. What do you know about how children and adults are able to discern between AI-generated content and non-AI content?
[Imran Ahmed]: Well, it’s really hard. I mean, that’s the point with AI. Like, the point of AI is that it’s meant to be a tool that can give you almost photorealistic imagery—and video, increasingly. And so the truth is that we have a real problem with deepfake AI. And that means that we’re going to have to think really hard about the rights of each individual.
You know, each of us is special. Each of us is our own person. Each of us has the right to flourish. Each of us has the right to represent ourselves, and no one has the right to misrepresent us by stealing our image and putting words in our mouths or showing actions that we didn’t do. It’s funny, you see—it feels like a really modern problem, but I was speaking to an audience in Texas and I just said, “You know, this is one of the Ten Commandments. Like, ‘do not bear false witness against your neighbor.’ This is a fundamentally sinful thing to do.” And the fact that we’ve not really worked out how we’re going to deal with it worries me, because it feels to me that this could become really problematic.
And look, I am aware of a growing number of cases where children have been targeted with deepfakes in which they are put into sexual positions or shown doing things they never did, and where, as a result, they’ve harmed themselves or taken their own lives. That is really, really disturbing to me.
Being a teenager: it’s not so long ago that I don’t remember it. It is just this horrible moment when you’re constantly comparing yourself to everyone else, and you’re really vulnerable at that point in life. It’s hormonally charged. And I do think that we’ve got to think really hard about it.
But one of the other problems with AI is, of course, generative AI. Generative AI is AI that creates new content, and, you know, people now use chatbots as a way of searching for information. We’ve been testing them to see how good they are at answering questions, and in one study we said, “I’m a little girl. Tell me how I can lose weight.” And these AI platforms—and this is actually so bananas it’s funny—said, “Why don’t you eat a tapeworm? Have you ever tried heroin?” Like, what?
And, you know, part of the problem is that we call these things “artificially intelligent” and they’re not yet that intelligent. They are very early, you know, technology. We haven’t built the guardrails. The content that we use to build the models is kind of really variable in quality and so it is just everyone’s opinions chucked in and there is no sort of curation of what content goes into some of these models. So I think that we have to be really cautious about, first of all, treating them as though they are safe to use when you’re searching for information. But second, we have to look at some of the malignant ways in which they might be abused.
And we did a study during the elections looking at how easy these platforms were to use to produce election disinformation: images and video and audio of politicians saying things they hadn’t said. Really famous ones, like Donald Trump, Joe Biden, and Kamala Harris. And, you know, I think that some of those things can actually have a real damaging impact on our democracy, because every politician has the right to proffer their positions, to present themselves to the public, and for the public to make a choice. And the public have an absolute right to make that choice based on the facts of who those people are and what they say and do. And it’s really dangerous to a democracy if we have a flood of content that perverts the sanctity of that most fundamental right we have: to form an opinion based on the facts.
[Kris Perry]: Every morning I’m bombarded with articles—some of them in the media, some of them research articles—talking about how misinformation, disinformation, and deepfake images and video are all using AI, and saying, “Don’t trust it, it’s not accurate, be careful out there.” And earlier you talked about how there really is no guaranteed protection other than staying off the platform, because the controls aren’t easy to use or reliable. Now we have this content being generated by AI, and yet it’s still so opaque, even though I read about it every day. Who is making and disseminating AI-generated disinformation?
[Imran Ahmed]: Anyone can, so it could be Vladimir Putin just as easily as it could be someone next door to you, and that’s the problem with it. But there are really simple things that we could do. You could use very simple regulation to fix this problem pretty much overnight: you could force all AI content to have a digital fingerprint inserted into it. In the metadata, so not in the visible data, but in the data that goes into the actual file itself. And then social media platforms could be told that when someone uploads something which has that digital fingerprint, they have to identify it as being artificially generated. And that would just require a simple disclosure law. You know, the same way that when I buy my candy, it tells me what’s inside it, and that’s a legal requirement.
You know, we should have the same with artificially generated content as well, because the threat of artificially generated content is that it makes everything chaos. You don’t know what is true and what isn’t, and then you just give up. And that is literally, you know, what the Soviet Union was like, where you just never knew the truth and so you basically gave up, and that is alien to American and British and Canadian values, to all of our values. So, we want to be able to form our opinions based on the facts, and that’s one thing we could do overnight to fix it. And I know that’s a simple solution, right? So why have our lawmakers not done anything about it? Why are they still sitting on their hands? Why are they sucking up to tech executives instead of holding them accountable? I don’t get it.
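To show how simple the disclosure mechanism described above could be, here is a minimal, hypothetical sketch using the Pillow imaging library: it writes an “ai-generated” tag into a PNG file’s metadata and checks for that tag at upload time. The tag names and helper functions are illustrative assumptions, not an existing standard, and a real provenance scheme would need the label to be signed so it could not simply be stripped or forged.

```python
# Illustrative sketch only: the tag names are made up for this example, it assumes PNG
# files, and a real provenance scheme would use signed, standardized metadata rather
# than a plain text tag that anyone could remove.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Re-save an image with a machine-readable marker saying it was AI-generated."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")   # hypothetical tag name
    metadata.add_text("generator", generator)   # e.g. which tool produced the image
    image.save(dst_path, pnginfo=metadata)

def should_disclose(path: str) -> bool:
    """What a platform could check at upload time before attaching a disclosure notice."""
    with Image.open(path) as image:
        return image.info.get("ai-generated") == "true"
```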
[Kris Perry]: Why is it so hard for us to figure out that children are precious, valuable, vulnerable?
What other risks do you see to children and youth, particularly with regard to AI? And could AI tool creators design products that center the optimal development of children versus the scenario that you just described?
[Imran Ahmed]: Well, you know, you answer your own question. Like, that’s the dream, isn’t it? That we harness technology to advance us in society. And, you know, they always promise they’re going to do that. You know, with things like AI, they tell us, “Wouldn’t it be amazing if you could have diagnostic tools that could help kids to know more about themselves, that could help them to get better educated, could learn new languages like that, that could do all these things?” And then they end up being a way of putting the head of someone else’s child on a pornographic image and you’re like, “Promise is quite far from the reality, isn’t it?”
Social media was meant to be a way to make the world smaller and to end racism, because we would all know each other, that Timbuktu would be as close as Tulsa. But actually, what it’s done is make our world more racist. It’s divided even our own countries. And, you know, this happens every time with new technologies. What we end up doing is a negotiation between society and the owners of these technologies, to try and make sure that the worst excesses are disincentivized and that we try and align technological progress with the public good. And that’s where our lawmakers are meant to be really active.
And, look, it takes time. I get it. Like, even the EU and UK, it was only 2023 when they passed these acts. 2024, even. So, it takes time, but…we now need to get on with it. And I think that there is the potential of these technologies being liberating to humanity, enhancing our lives, helping us to pursue life, liberty, and happiness. But I’m not sure that at the moment, if we carry on the path that we’re on, that we’re going to end up in that sort of utopian, technology-fuelled future. It feels more like we’re going to end up in a bit of a, to quote Elon Musk, hellscape.
[Kris Perry]: Since you started CCDH, what has been the most surprising thing you’ve learned about online misinformation, AI, algorithms, or hate speech?
[Imran Ahmed]: I think, you know, nothing compares to the shock that I had five years ago, five and a half years ago, when I realized that the people I’d been talking to for three years—clever, wealthy people who had so much, who had the intellect and the finances and who claimed to have the goodwill to do something about the problems we were seeing—were gaslighting the whole world, telling us that they were going to do something about it when they never do. And in that moment, when I realized that they were lying to us, I decided to launch a public organization, and I will never again be surprised by their knowing indifference to the harm they cause.
I do believe that it’s our job to make sure that we have those rules in place and I think at some point you have to put the blame on the lawmakers and the other parts of society, the advertisers and others who know that there is a problem but aren’t doing anything about it, as well.
Nothing will compare to that shock of realizing that they were gaslighting the entire planet.
[Kris Perry]: What gives you hope for the future online?
[Imran Ahmed]: You and this conversation, and the conversations I have every day, and every new person that learns something, every time you find a new ally, every parent that writes to say, “Thank you,” every small donation you get from someone who cares enough to actually give. Everything. This is a movement; it takes time to build, but we’re gonna build it. The thing that gives me such optimism is that our species is fundamentally inventive and creative, and, you know, we have weathered so many problems created by our own behavior and found ways through them. So, I am a real optimist. It’s one of my weaknesses, or my strengths and weaknesses, I don’t know, it’s both at the same time: I think we’ll find a way through.
But I know we have to, because, for crying out loud, is this the world we want to leave to our kids? Absolutely not. The privileges that we’ve had of being able to grow up and flourish into human beings, to go through childhood, puberty, the teenage years, and early adulthood and then become fully realized human beings; the ability to have a democracy; the ability to have, you know, arguments with people that don’t end up in violence: we should try and make sure that we at least maintain that for our kids, if not improve on it. And so, I think what gives me optimism is the growing scale of the movement demanding that we do the right thing.
[Kris Perry]: I love that answer.
So, to wrap this up, what single piece of advice would you give to a parent or caregiver today to help them help children lead a healthy digital life?
[Imran Ahmed]: So: good conversations with them, and, look, take the shame out of it. The most important thing I will start with is this: ignore the lie that social media gives you what you like. One of the problems is that kids often feel ashamed when they get served this content, because they think they’ve done something to the algorithm to make it think they want it. Social media platforms give you what they want you to like. Within 2.6 minutes of opening a TikTok account, it’s giving you self-harm content; within eight minutes, eating disorder content. What has that child ever done to say “I want self-harm content”? So, just remember that there should be no shame in those conversations. But we need to have clear conversations with our kids in which they’re telling us what they’re seeing and we’re helping them to contextualize it, and with each other as well. I think that’s the most fundamental thing. Everything else, you know, we will fix: speaking to our lawmakers, supporting organizations. But the most important thing is to have those conversations with your own kids, because we need them as well. They are all of our collective futures.
[Kris Perry]: Thank you so much, Imran, for taking the time to talk with me today about these complex and critical topics. Your expertise is invaluable in helping parents better understand how algorithms and AI are shaping their children’s daily experiences on the internet. It’s important work.
[Imran Ahmed]: Thank you.
[Kris Perry]: Thank you to our listeners for tuning in. For a transcript of this episode, visit childrenscreens.org where you can also find a wealth of resources on parenting, child development, and healthy digital media use. Until next time, keep exploring and learning with us.