How will emerging technologies impact my teen? How are algorithms already influencing my children’s attitudes, behavior – and their future? What will the Metaverse be like? How will adaptive learning platforms affect my child? What will the everyday lives of children look like in 5 years?

From virtual reality and AI, to smart toys that talk back and the mysterious Metaverse, new technologies are evolving at an ever-increasing rate. Children and Screens’ #AskTheExperts webinar “Here, There, and Everywhere: Emerging Technologies and the Future of Children,” held on Wednesday, May 4, 2022, addressed parents’ concerns about the future of childhood and technology, and their desire for a foundational understanding of what it all means. An esteemed panel of leading researchers, policy experts, roboticists, tech ethicists, and other experts discussed what new and developing technologies are expected to reach the market in the next five to ten years, how artificial intelligence is currently affecting children and teens, how to navigate the new technologies, and what’s next for children and adolescents for generations to come.

Speakers

  • Mitch Prinstein, PhD

    Chief Science Officer, American Psychological Association
    Moderator
  • Hod Lipson, PhD

    Professor of Engineering and Data Science, Columbia University
  • Richard Freed, PhD

    Psychologist and author of Wired Child
  • Rachel Severson, PhD

    Associate Professor, Department of Psychology, University of Montana
  • Justine Cassell, PhD

    Professor, School of Computer Science, Carnegie Mellon University; Researcher, Inria Paris
  • Seth Bergeson, MBA, MPA

    Fellow, World Economic Forum; Manager, AI & Emerging Technologies, PwC

00:00 Introduction

Introduction by Pam Hurst-Della Pietra, DO, President and founder of Children and Screens: Institute of Digital Media and Child Development and host of the “Ask the Experts” webinar series.

02:02 Mitch Prinstein, PhD

Moderator Dr. Mitch Prinstein, Chief Science Officer of the American Psychological Association, introduces the panel and kicks off the episode with a reminder of children’s unique sensitivity to their social context, and technology’s increasing role in social interaction over the last 20 years.

03:53 Rachel Severson, PhD

Dr. Rachel Severson, Associate Professor of Psychology at the University of Montana, discusses children’s understanding of social robots and voice-based assistants, e.g., smart speakers. She shares examples of youth’s interactions with robots and research about how children understand the emotions and morality of socially interactive robots. She provides statistics on the prevalence of voice-activated technologies in homes and the ways that these tools are being used in children’s lives. Finally, she poses some critical questions that we must consider moving forward.

22:32 Hod Lipson, PhD

Dr. Hod Lipson, a robotics engineer and Director of Columbia University’s Creative Machines Lab, begins with shocking revelations about the exponential growth rate of computing power and the pace of AI’s ability to extract information from data. He outlines the opportunities and risks of advancing AI and robotics, and their profound implications for children’s lives and society as a whole. He reviews the increasing prevalence of AI-generated, real-seeming online “personas,” and outlines the current skills and limitations of robots. He closes with the message that the conversation about new technology is not simply for engineers, politicians, psychologists, or parents: everyone needs to understand where this technology is heading and learn to navigate the future so that we can reap its benefits without paying unintended costs.

35:16 Justine Cassell, PhD

Dr. Justine Cassell, Professor of Computer Science at Carnegie Mellon University and Researcher at Inria Paris, details her lab’s studies that show potential for “virtual peers” in education and informal learning, where cartoon children listen to real children and help them learn, instead of simply providing answers or talking at them. She shares examples of youth’s interactions with these virtual peers, including research on how children with autism spectrum disorder may learn social skills through programming virtual peers themselves. Finally, she proposes some design guidelines for the creation of virtual characters that promote children’s learning and well-being.

50:02 Richard Freed, PhD

Dr. Richard Freed, child and adolescent psychologist and author of “Wired Child,” challenges the audience to reconsider how and when to introduce new technologies to children. Dr. Freed reviews emerging technologies in a historical context, considering expectations and outcomes of previously novel technologies, such as smartphones and social media. He outlines tech’s known psychological and physical health impacts on children, and the potential additive effects of future tech. He offers suggestions for how families, communities, and society can protect the cornerstones of childhood and ensure ethical development of novel technologies.

1:03:56 Seth Bergeson, MBA, MPA

Seth Bergeson, a Fellow at the World Economic Forum and Manager of AI and Emerging Technologies at PwC, describes how artificial intelligence is used in children’s lives today. He outlines five guiding principles that should drive ethical and responsible AI to ensure that children and youth are put “FIRST.” He also provides six categories and criteria that parents and guardians can reflect on when deciding which technologies and toys are appropriate.

1:13:36 Group Q&A

Mitch Prinstein leads the panel through a rapid-fire discussion to address questions asked by the audience. Topics covered include forthcoming research on impacts of emerging technologies, ethical considerations for tech development, advice for families struggling to decide how much tech is “okay,” whether children trust artificial intelligence more than other humans, and the importance of modeling healthy, balanced relationships with technology for children.

[Dr. Pam Hurst-Della Pietra] Hi and welcome! May the 4th be with you! While we may not have lightsabers and intergalactic travel yet, have you ever wondered what all of today’s artificial intelligence, robots, voice assistants, and immersive environments (including the Metaverse) are progressing toward? Do you want to understand how these emerging technologies will impact the everyday lives of your children and your family in the future? That’s what we are talking about on today’s Ask the Experts webinar, and I am your host, Dr. Pam Hurst-Della Pietra, founder and president of Children and Screens: Institute of Digital Media and Child Development. We have brought together individuals who are making this new world possible, and those working to ensure that it’s a world in which the youth of today AND tomorrow can grow up healthy – flourishing and living up to their full potentialities. We are so glad that you could join us for this critically important conversation. The future starts today!
Some of the questions that you submitted when you registered for today’s webinar will be addressed in the forthcoming discussion. If you have additional questions throughout today’s workshop, please submit them through the Q&A box at the bottom of your screen. While we may not have answers to everything, we promise you will walk away from today’s session with some concrete steps you can take as a parent, educator, clinician, researcher, policymaker or tech developer. For those of you who wish to rewatch and share today’s episode, we will post the video to our YouTube channel within a week. Now, without further ado, it is my great pleasure to introduce you to today’s moderator – Dr. Mitch Prinstein. Mitch is the Chief Science Officer of the American Psychological Association and a Distinguished Professor of Psychology and Neuroscience at the University of North Carolina at Chapel Hill. He is also a member of Children and Screens’ National Scientific Board of Advisors. He has published over 150 peer-reviewed papers and 9 books, including an undergraduate textbook in clinical psychology, graduate volumes on assessment and treatment in clinical child and adolescent psychology, a set of encyclopedias on adolescent development, and the acclaimed trade book, “Popular: Finding Happiness and Success in a World That Cares Too Much About the Wrong Kinds of Relationships.” We are thrilled to have Dr. Prinstein with us today to lead this discussion. Welcome!

[Dr. Mitch Prinstein- Moderator] Thank you so much. I’m so excited to be here and moderate this fascinating discussion. Thank you so much to Children and Screens. Welcome everybody to “Here, There, and Everywhere: Emerging Technologies and the Future of Children.” Over roughly 60,000 years, the human species as we know it today has evolved to be remarkably dependent on our social context. Children in particular are especially sensitive to the social context, with their educational, moral, behavioral, and even neural development dramatically shaped by the social worlds that they grow up in. For about 20 years that social context has included technology, in ways that most of us could never have dreamed of. For these many years, there’s been deep discussion of the ways that smart devices and social media may be relevant to child development, but that’s not the only technology that’s emerging. Today we have five fascinating presentations on the ways that emerging technologies may change the world that youth grow up in, and what the implications are for kids’ development, for future generations, and perhaps even for further evolution of our species. This is a forum for us to learn from scientific experts and ask questions. We’re interested in all sides of the debate and all of the complex issues that must be taken into account when we consider how the technological revolution will guide us as parents, clinicians, educators, policymakers, and fellow humans who still rely on each other quite considerably. I’m very excited to hear these presentations and to help facilitate a great dialogue between all the folks who are here today and our terrific panelists. We’re going to start today with Dr. Rachel Severson, an Associate Professor in the Department of Psychology at the University of Montana. She’s director of the Minds Lab, where she and her research team investigate how children attribute minds and internal states to human and non-human others, and the social and moral consequences of doing so. Very much looking forward to Dr. Severson’s presentation.

[Dr. Rachel Severson] Thank you so much, Mitch, and thank you for organizing this lovely group of panelists, Pam. I am going to speak today about children’s understanding of robots and voice-based assistants. And to start us off, I want to share a story about me and a robot named Robovie. When I was in grad school, many years ago, at the University of Washington, my research group conducted a study involving this robot, Robovie, which was designed by Hiroshi Ishiguro and Takayuki Kanda at ATR and Osaka University in Japan. This project was investigating how children and adolescents understood this robot, and whether they thought it had moral standing. And although this robot can operate autonomously, the constraints of this particular project required us to control the robot from behind the scenes, what is referred to as “Wizard of Oz-ing.” Now, at the outset of this project, I had remarked that Robovie is just a robot, and therefore shouldn’t be considered intelligent or a social agent, and that it certainly didn’t deserve moral regard. And one day, as we were getting this Wizard of Oz system set up, my collaborators on the project drove Robovie up right behind me at my desk and had Robovie say, “Rachel, I hear you think I’m just a robot.” I turned around and assured Robovie that I knew he wasn’t just a robot, playing out my part in this joke, and everyone had a laugh. At that point my colleagues returned to their real work, and I turned back to my desk to continue working. And Robovie was just standing there behind me, just over my left shoulder. But Robovie doesn’t stand completely still in a robotic way; rather, it makes really subtle movements, adjusting its arms and its head much in the way that a person would, and this is really a genius part of Robovie’s design that makes it so compelling. And I found myself turning back to Robovie to apologize: “I’m sorry, Robovie, but I have to get back to work.” At this moment I’m wondering, what am I doing? I know exactly how this robot works, and yet I was still really pulled into treating this robot as a social other, feeling unexpectedly and oddly compelled to excuse myself from the interaction. What this story illustrates is that even if we believe that personified technologies are not agents with real emotions, thoughts, or social or moral standing, it is so easy to nonetheless interact with them as if they are. And adults readily slip into this willing suspension of disbelief because, as Sherry Turkle has said, we are so highly social that these devices really push our Darwinian buttons. And research shows that adults like me have this incongruence between their expressed beliefs or judgments about personified technologies and their actual interactions. But what I want to focus on today is that the story is really different for children. To illustrate that, I want to share with you an interaction that was captured while a family of children posed for a photo with Robovie. As I mentioned, Robovie does this little shifting around while otherwise standing still, and you can see here that this child was bumped by Robovie while posing for the photo. He felt quite wronged by this, and demonstrated his feelings in the absolute way that only a three-year-old can, but interestingly went on to forgive Robovie for this transgression. So what’s happening here?
What I’d like to share with you is a body of research showing that children understand these personified technologies, so robots and voice-activated assistants, in a really different way: that is, understanding them as “sort of” alive. What this body of research, from my research group, my colleagues, and several other researchers, has found is that children are clear that these technologies are just that: technologies, and in no way biological. When we ask them questions like “Can they pee or poop?” sometimes they’ll take a little look and say, “Nope, they can’t.” So they understand that they are technologies and not biological, and yet they also understand them as having feelings, being able to think, as capable of being a friend, and as deserving of moral treatment. For children, personified technologies are neither animate nor inanimate, alive nor not alive, but rather have a unique constellation of characteristics that sits in between these traditional categories. And what’s important is that these attributions guide children’s actions and judgments, so let me illustrate this with a couple of video clips from one of our studies. In this study, we had children (five, seven, and nine years old) interact with a little robot called Pleo that is modeled to look like a one-week-old Camarasaurus dinosaur. We would introduce participants to the robot and let them play on their own for a few minutes. Here’s one participant, a nine-year-old, playing tug-of-war with Pleo using a little leaf. As you heard, the child said, “Oh, you won that time, nice job.” What I want to point out is that this child could win every time; the robot is not strong enough. But he lets the robot win. Why does he do this? Let me give you another example. When the researcher returned after that brief time the child and the robot spent on their own, we made use of one of this robot’s design features: when you hold it up by its tail, it responds with increasingly agitated vocalizations. We then asked participants whether it was all right or not all right to hold Pleo by the tail, and the response you’ll see here really represents what a majority of our participants said: “So is it okay to hold Pleo by the tail?” “No? Why not?” “Because that hurts him, and that makes him shout, and he doesn’t deserve to be held by the tail, because what could he do? What did he do to deserve that?” What I want to highlight here is that this child tells an adult researcher that what they just did to the robot was not okay, because it hurts him and he doesn’t deserve to be treated that way. And many children would subsequently hold and comfort the robot. In another study, now in preparation, we are looking at whether kids will hold a robot responsible for its actions; that is, our research has shown that kids grant moral standing to robots, but do they also see a robot as culpable for its actions? In this study, preschool children would observe a person building a tall tower of blocks, and once it was complete the person would say goodbye and step away. The robot then lifts its arm and knocks over the tower. And it’s really an open question whether the robot did this by accident, while waving goodbye, or on purpose.
We wanted to see how children interpreted this event, and what their judgments were. We also had another condition with a person in place of the robot, and our preliminary results suggest that about half of our participants thought the person knocked the tower over on purpose, whereas only about 30 percent of kids thought the robot did it on purpose. Compared to children who thought the tower was knocked over by accident, those who believed it was done on purpose judged it as less acceptable and more deserving of punishment. And children thought that the robot was culpable for its actions, although to a lesser degree than a person would be. So what’s going on here? To be clear, children are not confused. They understand that these technologies are just that, technologies, rather than biological beings. But those structural differences, the biological represented by DNA here versus the technological represented by the chip, can still give rise to functional similarities. Our participants say the robot doesn’t have blood and bones, but it has chips and wires, and so it can still feel. These conceptions form the basis for the social and moral regard, and the culpability, that children attribute to personified technologies. Now, shifting to another form of technology that is no longer just in research labs but is being rapidly adopted in people’s homes: let’s look at voice-activated devices, like smart speakers. Just to give you an idea of the rapid adoption of these technologies: in January 2018, 47.3 million US adults owned at least one smart speaker. One year later, that rose 42% to 66.4 million. And in January of this year, 94.9 million adults, a little over a third of the US adult population, had at least one of these devices in their homes. So what are the implications of this? There are positives to many of these things, but let me focus now on a question that we’re currently asking. Many of these devices are being developed to be virtual friends and companions, and of course we can query these devices for the weather or what the traffic is like today. But parents have shared with me that they also use smart speakers in their children’s bedrooms to tell bedtime stories, so that the parent doesn’t have to do this. And research suggests that adults divulge more to a virtual therapist than to a real therapist, which raises the question we are trying to address in a current study: will children turn to personified technologies, such as smart speakers, as confidants when grappling with difficult social or moral deliberations? We’re currently looking at whether children will view a smart speaker as a credible source when learning factual information, and research already suggests that they will; but more importantly, will they trust the smart speaker when deliberating about moral questions? The question of the context in which we’re using these devices is really important as we employ these technologies in our lives.
I want to close with some considerations and questions that I ask as a researcher and that may be helpful for parents and policymakers to consider with these emerging technologies, because the tough nut is that we sometimes have to project what’s going to happen in the future, to be on the leading edge of where these technologies may go, and to address those questions early on so that we have good information as we make decisions. One question is: how do young children view these technologies? With smart speakers, there are numerous anecdotes of young children thinking there’s a little person inside, or a person on the other end of an exchange, sort of like a telephone perhaps. These are really adorable stories, and they also illustrate that children are actively trying to figure out how to conceptualize these devices. Are they alive or not alive? Is there a real person in there? One of the questions that parents sometimes wonder is: am I confusing my kids? I’ve spent a lot of time talking with kids about how they understand personified technologies, and I’m continually impressed with the sophistication of their thinking. From a very young age, children understand the distinction between fantasy and reality, and they can track it. To illustrate this: one child that I was interviewing, who said that the robot has feelings, clarified that these were programmed feelings, but nonetheless feelings; so there’s that distinction between the structural and the functional element. The research has borne this out: children develop a rather nuanced view, so that even though a robot, or AI devices more generally, is programmed, they still understand it as some sort of experiencing entity, although one that experiences differently than humans do. One concern expressed by many parents, and researchers for that matter, is that these interactions may displace or replace social interactions with other people; in many ways this is a variant of a concern with screen media more generally, or online interactions. I think that’s an important question that is as yet unanswered, but something we should be paying attention to. So what is the impact? One thing I think it’s important to recognize is that many parents, operating from a concern about reducing screen time, have viewed smart home devices as one way to engage kids without the use of screens, and right now I don’t think there’s a consensus yet on the impact of those devices, positive or negative. But I think it makes sense to ask what purpose is being served, and whether, on balance, the device seems to be adding value to your life and to your children’s lives. So what are the upsides and downsides? I think there are potential upsides in allowing children who don’t yet type, read, or write some autonomy in choosing music or stories, and what I want to highlight here is that context matters as we consider the potential benefits and implications. There are some contexts of use for which these technologies may be particularly well-suited; for example, there’s evidence that some of these technologies might be useful intervention tools for children with autism, and that’s fantastic and really promising work, and Justine will be sharing more about other beneficial contexts of use.
We know already that children will trust technologies for learning factual information, and this can bode well for certain educational contexts, like informal or formal learning environments. And there are potential downsides if interacting with these technologies reduces the amount and quality of human social interaction. It’s really important that we as scientists, and as a society, think critically about the implications, both the upsides and the downsides for children, as these technologies take a more central role in their lives as their companions, their caretakers, and potentially their confidants. Thank you.

[Dr. Mitch Prinstein- Moderator] Thank you so much, Rachel. That was fantastic, so interesting, and lots of questions have come in. In the interest of time, though, I want to jump straight to our next speaker, and we’ll have time for questions at the end. So I want to jump straight to Hod Lipson, who is here with us. Thank you so much for joining us. Hod is a Professor of Engineering and Data Science at Columbia University, and a roboticist who works in the areas of artificial intelligence and digital manufacturing. He directs the Creative Machines Lab, which pioneers new ways to make machines that create, and machines that are creative. So excited to hear your comments as well.

[Hod Lipson, PhD] Thank you. I hope you can see my screen. I want to say thank you, Rachel; that is a really interesting perspective. I’m going to share a possibly different kind of perspective on AI and robotics, looking at some trends from a technology point of view. Having spoken about robotics and AI to lots of different people, I see almost a disparity between our expectations of technology, which tend to move forward at a linear rate (we tend to believe that next year will bring the same rate of progress as this year), and the reality. When it comes to robotics and AI, I feel that we are at a transition point. Just a few years ago, AI was really not very good at most things, so it wasn’t on the radar in any substantial way. But suddenly, in the past five years, things that were thought impossible just a few years ago have become possible, and now we’re struggling with ethics around AI, with how to use it and reap the benefits without incurring all the costs and without misuse. Suddenly all of this is very overwhelming for everybody, including AI researchers themselves. So I want to illuminate some of this chaos, and see the places where we can turn stress into opportunity. We’ve grown pretty comfortable over the past decade with AI predicting the weather and predicting the stock market; everybody knows this. It has a lot of consequences, but we’ve grown comfortable with it. What we’ve learned even in the past three years, during the pandemic and elections, is how much AI changes politics. AI worms its way into how we think, what we choose, what we want, what we buy, what we click on, and we are very susceptible to this manipulation by algorithms, in many, many ways. This applies to adults as well as children, and we are just beginning to learn and understand how profound this effect is, because AI is listening and talking to us all the time. Now, one of the challenges of thinking about artificial intelligence is that it is invisible. You don’t see it. It’s everywhere, but you can’t see it. It’s like air: it worms its way into everything we do, but you don’t see it. You walk down the street, you don’t see AI, you don’t see robotics, and yet it is pervasive in everything from advertising on down. So being able to see it is, I think, the beginning of making sure that this AI revolution happens in a good way for everybody. Scientists, but also everybody else: leaders, politicians, journalists, parents. Everybody needs to understand how AI works. What can it do? What can’t it do? What are the risks? What are the benefits? And as Rachel said, there are advantages and disadvantages; it’s a very nuanced discussion. I’ll give you a quick example: driverless cars, one of my favorite topics. With the immediate derivatives of AI, people think, “Okay, that’s a technology that’s going to affect the lives of drivers and so on.” But it has repercussions on lots of different things. For example, the number one killer of children today is cars. If AI driving technology becomes good enough, and it’s getting better all the time, we eliminate the number one killer of children in developed countries. That’s an amazing accomplishment, and nobody would argue against it.
Same thing with AI that can do diagnostics. This is an FDA clinical trial of software that looks at a skin lesion and determines whether it’s cancer or not, and does so better than a team of skin cancer specialists. Now, you might look at this and think, “Oh, this is going to affect doctors; I’m a parent, I don’t care about this.” But really, if you think about it, half of the children on the planet don’t have access to any doctor, let alone a team of skin cancer specialists, and when you put this kind of diagnostics on the phone, with the AI, you save countless lives and a huge amount of agony. So all of this is happening behind the scenes. Among the technologies we’re developing in our lab are drones that fly over corn fields; they can spot minute signs of northern leaf blight, and they can spray just one plant instead of the entire field, as is done today. That can reduce pesticide use by orders of magnitude, improve yield, and so on, again helping future generations. Security is another one; surveillance is one of the most hotly debated topics. Do we want more surveillance? Is it good? Is it bad? Safety versus freedom; I won’t get into that, just to say that there are about a hundred thousand children being trafficked, and better surveillance helps catch the people responsible. I know that if my child, god forbid, got lost, I would want every camera around the city to look for them. So these technologies, again, are nuanced; they’re dual-use; there are lots of issues here to look at; but they affect our children and our future in a very profound way. So why are these things happening now? Quickly: there are a lot of accelerators. This is Moore’s law, which means computers are getting faster, cheaper, and better at an exponential rate. This is computing power per dollar over the last 100 years. This is where we are today, in 2022, and just look at how fast things are moving forward. This is alarming, and amazing, and terrifying at the same time. This is where we were just 10 years ago, almost indistinguishable from 1950. This is where we are today, and, since it doubles every 18 months, this is where we’re going to be 10 years from today, when we’ll look back at 2022. Technology is moving at an incredible pace, and all the AI that we’re seeing is nothing compared to where it’s going to be in 10 years. So we have a lot of interest in, and responsibility for, utilizing this AI to do all the amazing things I just outlined, but also making sure that it’s used the right way. So that’s computing power. Data, which is the fuel for AI, is also growing at an exponential rate, with a doubling period faster than Moore’s law. And what is less known, but also recently learned, is that the power of the AI itself, its ability to extract information out of data, is doubling at an even faster rate of about three and a half months. So with that background, one of the things that’s important to understand is what AI is not good at. There are a lot of myths around this, and I want to highlight two things that are commonly misunderstood. One area that AI is not good at is having a conversation. There’s no AI out there that even comes close to being able to have a conversation.
It is true that Alexa can answer canned questions, like “Where’s the best pizza near me?” and “What’s the weather tomorrow?” But there’s no AI out there that can have a conversation. Even the best AI, GPT-3 if you’re familiar with the term, can write essays and poems, but cannot have a conversation. That means there are certain areas that AI cannot go into, like education, which involves conversation, like the one we’re having, or going to have soon. It’s not like we’re close to solving it; nobody has a clue how to do that. So it’s an area where jobs are safe, if you like, where we humans still have a lot of opportunities. The other area, which is often overlooked, is physical labor. AI and robots cannot do unstructured physical labor. AI can drive your car tomorrow, but when your car breaks down, it’s going to be a human driving it and a human crawling around to fix it. There’s no AI or robot that comes close to being a plumber, a hairdresser, or a nurse. And this is important: a lot of parents ask me, in this world of AI, what should my children study? Sometimes the answer is surprising: no, being a radiologist is not a good job for the future, but being in the trades might be. Kind of interesting thoughts there. But AI is also good at some things that we don’t normally think of as appropriate for AI. One area is creativity. AI is getting better and better at engineering creativity and artistic creativity: creating songs, creating music, creating art. This is a robot that we have in our living room that paints. And my younger son told me the other day that he doesn’t think he can paint as well as the robot. He doesn’t think he can draw as well as a robot, and that caused me to pause, because I love robotic art, but if it comes at the expense of people thinking “I can’t design things as well as a robot, I can’t compose music as well as an AI, I can’t draw as well as an AI,” what does that do to our sense of pride and self? So this is something to think about; again, challenges and opportunities. We’ll probably speak a lot about the Metaverse, and AI is getting better and better at impersonating people on the Metaverse. These two videos here are not of real people; these are fake videos, fake people who don’t exist. AI can generate a billion of these and create personas online that don’t exist. For good and for bad, for better or worse, these capabilities are going to become prevalent, and we have to know that they’re coming. And it’s not just the virtual world: this is a robot that we have in our lab that can make all kinds of facial expressions; it’s learning to imitate people. And I can tell you that even I, as a jaded roboticist, just as Rachel said, fall for it and smile back sometimes when it smiles at me. So there is a lot happening in this area. It’s very powerful technology, and it’s a very nuanced discussion, with pros and cons, benefits and challenges that we have to navigate. For me the biggest message is that this is not just for engineers or politicians or psychologists or parents to solve; everybody needs to understand where this technology is going. And we all have to navigate these treacherous opportunities so that we can reap the benefits without incurring all the costs. Thank you.
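As a rough, back-of-the-envelope illustration of the doubling rates Dr. Lipson cites (computing power per dollar doubling roughly every 18 months, and AI’s ability to extract information from data doubling roughly every 3.5 months), the short Python sketch below computes the implied growth factors. The doubling periods come from the talk; the rest is illustrative arithmetic assuming simple exponential growth, not a claim from the webinar itself.

```python
# Growth after `months`, given a doubling period: factor = 2 ** (months / period).

def growth_factor(months: float, doubling_period_months: float) -> float:
    """Multiplicative growth after `months`, assuming steady exponential doubling."""
    return 2 ** (months / doubling_period_months)

# Compute per dollar (Moore's law, ~18-month doubling) over 10 years:
print(f"Compute in 10 years: ~{growth_factor(120, 18):.0f}x today")  # ~102x

# AI's information-extraction efficiency (~3.5-month doubling) over 1 year:
print(f"AI efficiency in 1 year: ~{growth_factor(12, 3.5):.0f}x")    # ~11x
```

On these assumptions, “where we’ll be in 10 years” is roughly a hundredfold jump in compute per dollar, which is why the curve Dr. Lipson shows looks so steep at its right edge.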

[Dr. Mitch Prinstein- Moderator] Wow, amazing and scary, just as you said. That’s fascinating. Once again, lots of questions, and in the interest of time, since we’re running just a little bit late, I want to move on to our next speaker. But I’m so excited for the opportunity to hear your answers to the questions coming up soon. I’d like to move on to Justine Cassell, who is a Professor in the School of Computer Science at Carnegie Mellon University and a researcher at Inria Paris, where she studies issues at the intersection of natural language processing, artificial intelligence, cognitive science, and human-computer interaction. Really excited to hear what you have to say; let me turn it over to Justine.

[Justine Cassell, PhD] Thanks very much, Mitch. I would certainly disagree with Hod on a number of the technical points, but we’re not going to go into that now. In 1990, I arrived, as somebody with a PhD in developmental psychology and linguistics, in a computer science department at MIT. I was in an amazing place to see what kinds of technologies were coming down the assembly line, and one of them, in those days, was toys that talk to children, very different from what we see today. It was a new thing; they had chips; it was wonderful. Children were going to be able to listen to stories, and truthfully, I was horrified. I was really horrified, because children don’t need to be talked to. Likewise, a few years later, my colleagues started talking about the benefits of AI and other technologies for education, and yet when I saw the images that they shared of children learning with computers, I was likewise horrified, because it seemed to me that they espoused a dystopic future where we see children as vessels into which we pour knowledge or pour stories, and where the children themselves were not being given agency. So in those early days, around 1995, I started building smart toys: toys that didn’t tell stories to children, but that listened to children’s stories. I invented programming languages for five-year-olds (you can see Sage at the bottom left there, from 1997) with which children could program their ideal listener to tell stories to: toys that allowed them to switch the parts of the story around, toys that allowed them to play with other children’s stories, and so forth. And as part of that push to build technologies that listen to children, my students and I built something that we called a “computer that was not an authority figure.” How do we counteract that vision of children being fed information, or being fed education? The way we did it was to build a virtual child, a computer that was at the same level as the child: what we call a virtual peer. Let me show you what virtual peers look like in interaction with children: “I actually think it has to be low, because… what do you think, is it going to be higher or less?” “Well, maybe we should make it lower so it has less room to wiggle around.” [Gasp] “You took my idea, Alex! That was my idea, because I’m not…” “Nuh uh.” “Yes it was.” Okay, so here you see a child interacting with a virtual child. Many parents and teachers are horrified by the extent to which this child is treating the virtual peer like a real child. But as Rachel said, no one is confused about this. No children think that this is a real child. They know indeed that this is a virtual thing, but they’re willing to suspend disbelief, perhaps more than we adults are, and to get what they can out of this conversation. And in fact, we discovered that in interaction with the virtual child, designed so that it elicited talk from the real child, children learn. In this particular instance, the real child and the virtual child take turns being the student and the teacher; they tutor one another in the kind of science that eight-year-olds learn. And the children learn. As time went on and we started thinking about other roles for these virtual children, we thought about all the literature showing that peer tutoring has as many benefits for the tutor as it does for the tutee.
And so is there a way that, for slightly older children, a virtual peer can be both tutor and tutee? Let me show you a video of that, and then we’ll talk about it more. This is Jaden, and I’m going to show you two very short snippets from a longer interaction. This is a young woman who is not at all good at algebra, and you can see by her scrunched forehead that she is anxious about being told that she’s going to interact with an algebra tutor. Let’s look at how that goes: “Hi, I’m Jaden, what’s your name?” “I’m Sasha?” “It’s nice to meet you.” “You as well.” “So what school do you go to?” “I go to LS.” “Oh cool, I go to Paw Cyber School.” “Cool.” “What grade are you in?” “7th.” “Cool, me too.” “What do you do for fun?” “Um, I like to draw, what about you?” “I like playing robot soccer.”
Now, you could see that at the beginning she had quite an anxious look on her face, and by the end she’s smiling. And also, by the end she asks the virtual peer questions about what it likes to do for fun, whereas at the beginning she was only answering the virtual peer’s questions. Let’s move to the math part of this interaction; once again, she’s not good at this and not thrilled at the idea. “Nope, that’s not quite right. Don’t worry, I think these are hard too.” “Okay.” “Plus nine… oh, fifty.” “Yeah, I think that’s right.” “So what do you think you should do next?” “I will divide by ten, and that’ll get me x equals five.” “Correct! I’m such a good teacher. Okay, let’s go to the next one. I’m having a lot of fun working with you.” “Me too.” Now I want you to notice something about both of the virtual peers that you’ve seen so far. They are not Final Fantasy. They do not look in the slightest like real children, and that’s a design decision that we’re going to come back to in a few minutes. And they don’t sound like real children; they have robotic voices, once again a design decision that we made. And despite that, this young woman has in fact done better at math than she has in the past. What we discovered was that children do learn from these virtual peers, particularly if the virtual peers have this kind of social period, where they in some sense relax the child and build a social bond, a social foundation, on which the learning can take place. I’m going to show you one more example before we talk about this more generally. This is a study that we did with teenagers with high-functioning autism, sometimes diagnosed as Asperger’s, and the question we asked first was whether their interactions with virtual peers were different from their interactions with real peers. What we discovered was that in interaction with virtual peers, they were far more capable of engaging in social interaction. They looked at the virtual peer, whereas they looked away when they were interacting with real children. They contributed to the conversation with the virtual peer and not with real children, and what they said was relevant to what the virtual peer had said, which wasn’t the case with real children. That’s interesting, but it was not what we wanted, because the last thing we want is to consign children with autism to a life of interacting with virtual peers. So we wondered, since their interaction with these technologies was smooth, what would happen if we asked them to control the technologies? What we did was simply give them a control panel that we had built, where they could choose buttons for what the virtual peer should do; and then we made a series of buttons that they could design themselves, so they could record things for the virtual peer to say, press those buttons, and the virtual peer would say what they wanted it to say. So let’s watch this. “Alright, have fun with Sam.” “Okay, let’s pretend: once there was a little boy called Jack and his best friend Mary.” “So this panel controls what Sam says, okay?” I’m not quite sure whether it’s as quiet for you as it is for me, so let’s try this and see if that’s better. “Okay, can you click it? So what do you want Sam to say next?” [Music] So there’s another kid interacting with the virtual peer, and this young man is choosing the behaviors for the virtual peer to say and to do with its body.
I’m finding this difficult to hear, so I’m going to stop it here. What we discovered was what’s called a transfer effect: that is, children who used the virtual peer to learn particular social skills, like being relevant to what was said before or adding new information to a conversation, were more likely to use those kinds of behaviors with real children afterwards. So they transferred what they had learned with the virtual peer to real peers, whereas they didn’t transfer what they learned with so-called social stories, which are the current state of the art in teaching social skills to teens with autism spectrum disorder. I show you these things as examples of what we can do with computers that are children, as opposed to authority figures. We find, and Rachel said this earlier, that the children often try to teach the virtual peers. We had a wonderful instance once where a little girl said, “That’s an okay story, Sam, but they like it when you tell a longer story,” which is kind of great, because it’s a kid explaining to a piece of technology what the experimenter wants. But we’ve set ourselves a set of design guidelines that we stick to: we do extensive multi-year studies of child-child interaction, and the virtual peers only say things that we have heard children say to other children. They’re not an adult’s idea of what a child should be like. They’re not photorealistic, so they don’t look like real children. They are gender-ambiguous and ethnicity-ambiguous, so that we don’t inadvertently convey negative gender or ethnicity stereotypes. And I increasingly feel that these are technologies that help children feel in control of their world, as they do when they’re playing with peers. It gives them an always-available listener, and I often draw a parallel between a virtual child and an imaginary playmate. At the same time, however, as a technology designer who came from developmental psychology, I’ve developed a set of ethical design guidelines that help children understand that this is simply a technology. I always build in an aspect of the technology that children can take apart, and I think this is important for parents to remember, because sometimes it’s easy to be scared of these things and distance ourselves, but they’re here to stay. What we can do is allow children to control them, to build them, to take them apart, to break them, to see what they can do to break Alexa, as ways of understanding what the technology can do, and to feel in control of it as opposed to being controlled by it. So thank you.
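For readers curious about the mechanics behind both Dr. Severson’s “Wizard of Oz-ing” and the button panel Dr. Cassell’s team gave the teens, here is a minimal, hypothetical sketch of such a control panel in Python. The button labels, utterances, and the speak() stub are invented for illustration; a real system would route each button press to the virtual peer’s speech synthesis and animation rather than printing to the console.

```python
# Minimal Wizard-of-Oz control panel: each button triggers one canned utterance
# for a virtual peer. Utterances here are hypothetical placeholders.
import tkinter as tk

UTTERANCES = {
    "Greet": "Hi! Do you want to tell a story together?",
    "Agree": "Yeah, I think that's right.",
    "Ask more": "What happens next?",
    "Encourage": "That's a great idea, keep going!",
}

def speak(text: str) -> None:
    # Stub: a real panel would send this to the character's TTS/animation engine.
    print(f"[virtual peer says] {text}")

root = tk.Tk()
root.title("Wizard-of-Oz panel (sketch)")
for label, line in UTTERANCES.items():
    # Bind each button to its utterance; the default argument freezes `line`.
    tk.Button(root, text=label, command=lambda t=line: speak(t)).pack(fill="x", padx=8, pady=2)
root.mainloop()
```

The point of the design, as described in the talk, is that the child (or the wizard behind the scenes) stays in control of what the character says; letting children add their own buttons and recordings is what turned the virtual peer into a teaching tool they could program themselves.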

[Dr. Mitch Prinstein- Moderator] Thank you so much. Really fantastic; lots to digest there. We’ll move right along to Dr. Richard Freed, a child and adolescent psychologist and the author of the book Wired Child: Reclaiming Childhood in a Digital Age. Richard is a leading expert on the use of persuasive design in digital media and how it affects children’s health.

[Richard Freed, PhD] Good day, everybody. Thank you, Mitch, and thank you, Pam and Children and Screens, for having me here today. I’m going to take a different tack than where we’ve been heading so far: I want us to consider the history of emerging technologies. We have emerging technologies now, but we’ve had emerging technologies for a long time. Where has that led us? Back in 1950 or so, the psychologist B.F. Skinner told us that teaching machines were the emerging technology, that machines would replace human teachers because they were going to be better. We heard that smartphones were supposed to be the be-all and end-all, that kids were going to have this amazing computer in their pocket, but we now know that smartphones have dramatically increased kids’ entertainment screen time and therefore likely hurt their learning success. We’ve had Mark Zuckerberg introduce the Zuckerberg- and Gates-funded Summit Learning, a screen-based program. Unfortunately, a lot of these new technologies are rolled out on less advantaged kids; this one was rolled out at a Brooklyn high school, and in 2018 much of that high school walked out on the program, saying, in effect, “We would like to be provided the same human-based schooling the affluent kids get: small classrooms, real humans teaching us, rather than the screens provided by Summit Learning.” So where are we headed? The latest offering from Meta is the Metaverse, with Facebook and Zuckerberg saying the Metaverse is going to be this new evolution of social connection. I think we heard that with respect to Facebook and Instagram, and unfortunately, the kids who use more social media are ending up more depressed, with what looks like body image disturbance and eating disorders. People are supposed to interact in this virtual space, on the bottom left, going to meetings and so on. Don’t we already have kids disappearing into back rooms on Fortnite and engaging in a virtual world at the expense of the real one? Mark has said that this supposedly isn’t going to increase screen time, but as we’ve introduced smartphones, tablets, and all these other technologies, they haven’t displaced one another; they’re additive. They displace a little bit, but essentially they just add to the total. So what do we actually hear? Mark tells us immersive all-day experiences will require a lot of novel technologies. I think a lot of parents are already concerned about screen time, and see its destructive effects. Is this where we want to be going? Mark Zuckerberg tells us the ultimate goal is true augmented reality glasses: hologram displays, projectors, batteries, radios, custom silicon chips, cameras, speakers, and sensors to map the world around you, all fitted into a pair of glasses. Do kids with a smartphone already pay any attention to you as a parent? Lots of teachers are saying kids aren’t learning because they’re already on a smartphone; they can sit there and watch YouTube or TikTok, or whatever even more compelling iteration exists a few years hence, while they’re supposedly listening to a teacher, and the teacher has no idea. I think we should consider the reality; again, these were all promised as lovely emerging technologies for our kids.
What’s the reality? Today, kids end up sitting before entertainment, consumer-based technologies; the latest research puts it at about eight hours and 40 minutes a day for the typical teen. And that is not spread out equally. I work with less advantaged kids. Sure, higher-income kids get a lot of this, but lower-income kids are really spending their lives with it. White kids are spending way too much time with it, at seven hours and 50 minutes; Black and Latino kids spend essentially 10 hours a day in a sedentary activity, in this supposedly “amazing” virtual world, where they sit there and are often exposed to lots of marketing pushing them to eat junk and fast foods. So, given what we were promised about the wonders of smartphones and what social media was supposed to do for kids, what’s the reality of what is happening? Because of a remarkable increase in sedentary activity, and a lot of marketing, we are seeing an explosion in child obesity. Researchers thought it couldn’t go any higher, but it keeps going up, and it’s not so much white kids who are getting affected; it is the less advantaged kids who are treated to all this screen time. We’ve also seen, in Dr. Jean Twenge’s research, a dramatic increase in depression, suicidality, and self-harm. Why is that? Because, I believe, kids should not be living their lives with machines; it displaces the real world. We’re also seeing, in spite of a lot of promises, that when kids spend more screen time, which tends to be entertainment-based, they don’t really spend it learning, whether at home or at school. The research is pretty solid that those kids don’t perform as well. Is there a better way? Can we think about, as we look at technologies going forward, what we can do? As we’ve seen, the people higher up in tech aren’t really going for this. The very top echelons in Silicon Valley roll out these technologies, but they do everything they can to keep their own kids off of them. What did Steve Jobs tell us about tech in schools? “I used to think that technology could help education. I’ve probably spearheaded giving away more computer equipment to schools than anybody else on the planet, but I’ve come to the inevitable conclusion that the problem is not one that technology can hope to solve. What’s wrong with education cannot be fixed with technology. No amount of technology will make a dent.” I think that’s a different message than the one we hear about what is supposedly going to be rolled out. When the top tech elite send their kids off to school, they often send them to places like where Mark Zuckerberg himself went to school: Phillips Exeter Academy, arguably always in the top five of our nation’s high schools. It’s in New Hampshire, and if you have about 60 grand a year, you can send your kid there. My family does not. The primary learning approach at Phillips Exeter is the Harkness method, and you’re looking at it here: about 12 students sitting around a wooden table, generally without technology, talking with one teacher.
And this is the kind of school that raises, I believe, future leaders: kids who can actually interact with one another, who look you in the eye, who develop their own thoughts, and who are not being caught up in persuasive design, which is what I want to talk about. Why are these very top folks in the tech industry so much less sanguine about handing their kids technology? I think they know what’s inside of it, and that is AI. I wrote the first major media article on persuasive design and how it negatively impacts children’s health and well-being. The top tech elite understand that these devices are built to manipulate kids. My colleague Susan Linn said the best toy is about 90% child and about 10% toy. I’m worried that all this AI stuff is really all about the computer and less about the kid. How does persuasive design work in all these technologies? We have psychologists, and that really worries me, helping to build these systems; I think psychologists are developing the Metaverse. What are they doing? How are they influencing or manipulating a kid? I’ve raised concerns about that and addressed them with the APA. Here’s a person trained in psychology; what is his intention as a game designer? He says that if game designers are going to pull a person away from every other voluntary social activity, hobby, or pastime, they’re going to have to engage that person at a very deep level, in every possible way they can. That was Bill Fulton, who worked in Microsoft’s games user research group: again, psychologists, or people trained in psychology, essentially looking to pull kids onto their platform and away from real life. So how can we help all families through this? Kristin Stecher, in a New York Times article, addressed why all these parents in Silicon Valley are not caught up in this, why they use these technologies less. Is it information, as in a lot of insider information, or is it privilege? The answer is that it’s really both. These parents have information, and they have privilege. I think we need to be, as health experts, imparting that to families: bringing them that science, and the cornerstones of childhood. We get seduced by machines, but the cornerstones of childhood (and the tech elite know this) have always been, and always will be, these. Family is the number one connection in your life, even if you’re a big corn-fed 17-year-old young man. I think we see a whole generation of kids losing touch with their families as they live in back rooms on devices, because of persuasive design. The second most important connection is school, and that includes real-life teachers. Real-life teachers are super important; you need to feel at home and connected to those people, and truthfully, that connection keeps kids from becoming suicidal and depressed. So I don’t think we should ever move away from that; we need to emphasize human teachers, like the affluent kids get. And we need to understand that this is not a fair fight for kids and families, because this is a super powerful industry.
The families I work with have less health literacy and less privilege than a lot of parents in Silicon Valley, so we need to help everybody. How can we do that? I testified yesterday in the California legislature to support AB 2408, a bill that would stop Silicon Valley from addicting kids with all their AI and persuasive design. We need science, and I hope psychology moves toward this, to help families understand that family and school should come first, not machines. And I’m hoping schools understand that they have a responsibility to look at the science. I think there should be an ethics code for emerging technologies that is based on children’s actual needs, not corporate profits. We shouldn’t use persuasive design to increase time on devices, and it should never distract kids from the cornerstones of childhood: physical activity (kids are largely sedentary today; how are more devices supposed to help?), family, and school. We should never use persuasive design to direct kids toward harmful content, which happens on places like TikTok and the other social media platforms that were promised to help kids; kids there are exposed to a lot of eating-disorder content, for example. Any new tech that comes out should demonstrate that it’s about learning and does not increase kids’ net entertainment-based screen time; it should probably decrease it, because somebody needs to decrease the time kids spend on this. It should be sanctioned and supported by truly objective bodies, not people who are getting a check for it. And it should never be tested first on less advantaged kids, which is typically what happens when programs like Summit are rolled out. Thank you.

[Dr. Mitch Prinstein- Moderator] Richard, thank you so much. That’s really helpful and powerful. I want to move us right along to Seth Bergeson, who is a fellow at the World Economic Forum and a consultant at PwC, focusing on AI governance and ethics for children and youth. Seth is also a member of UNICEF’s expert advisory group on AI for children. Thank you, Seth; excited to hear what you have to say.

[Seth Bergeson, MBA, MPA] Thanks so much, Mitch, and thank you to all the panelists. It’s really exciting to be here today and hear everyone’s perspectives. I’m going to try to keep this brief so we can get to the question-and-answer portion. I want to talk to all the parents and guardians and adults on the call today about how you can think about using artificial intelligence technology and introducing it into your family, if you so choose. And that’s a very complex question. We’ve heard from the panelists a lot about where AI is around children today, and I really like Hod’s concept of AI being invisible, because it is often embedded in devices that you don’t even think about using in your household these days. These are things like smart toys, which we’ve heard a lot about, and which often look like robots. There are smart speakers like Amazon Alexa; there was an alarming near-incident at the end of last year where an Alexa speaker suggested to a child who had asked for a challenge that she touch a penny to the exposed prongs of a plug in an electrical outlet, to see if she could do it without electrocuting herself. The parent intervened, and Amazon addressed it. So there are certainly physical and psychological risks that these speakers introduce. There’s the world of broadcast media, like YouTube and YouTube Kids. There’s social media, which you’ve heard a lot about, especially TikTok with its very addictive algorithm. There’s education technology, which a lot of schools use and which a lot of parents used during the pandemic to supplement the learning their children were missing out on at school in the traditional sense. And then there’s the virtual world: we started talking about a virtual world two or three years ago when I started this research, and now it’s really expanded into this metaverse. I’m thinking about things like Minecraft, Animal Crossing, and Facebook’s offerings going forward. I specialize in smart toys, these AI-enabled toys we’ve heard a little bit about, and my favorite example of why parents should care about smart toys is a smart doll named My Friend Cayla, which made headlines in 2017. It was found to illegally spy on children and to be very vulnerable to hacking: any adult, or any user who had the My Friend Cayla app on their phone, could essentially hack into a My Friend Cayla doll that was 30 feet away. So your child could be talking to My Friend Cayla in the bedroom while a stranger out on the street used the app to talk to your child through the doll. This was very alarming; Germany actually banned the toy and required parents not just to throw it out but to destroy it, turning it over to trash collectors who would give them a certificate of destruction, otherwise they could face fines or even time in prison. A lot of toy makers, as well as parents, think that toys are generally fairly innocuous as long as they don’t have choking hazards, the paint isn’t harmful, and there are no sharp edges. And often, in this tech narrative, people think: you add technology to a toy, what’s the worst that can happen? My Friend Cayla shows the very real risks of smart toys.
So what we’ve done at the World Economic Forum, as part of our Generation AI project (we really do see this generation of children as surrounded by AI, as being “generation AI”), is, first and foremost, advance design principles that companies can use when they’re designing AI for children and youth. We want these companies to put children and youth first, designing technology that is fair, inclusive, responsible, safe, and transparent. Those are the design principles, and we’re advancing these governance guidelines to help companies create ethical and responsible AI. So as parents, when you are reading about AI technology and these smart technologies, you can think about those principles, “fair, inclusive, responsible, safe, and transparent,” and I’ll share the link to our toolkit as well. The other thing we did in the recent publication of the toolkit is propose a labeling system that companies could use to succinctly communicate to parents, adults, consumers, and even children and youth how their technology uses AI and how it can be used appropriately, safely, responsibly, and ethically. Six categories we think are key: age, accessibility, sensors, networks, use of AI, and data use. In our publication we have a guide that parents and guardians can use to really interrogate the technology you’re thinking about buying and procuring for the children and youth in your life; if you’re a teacher thinking about introducing it in the classroom, you can think about this too. The first category is age: what age is it designed for? We’ve been thinking about this for toys for a very long time, but we also think parents should go a little further and think about the developmental stage that’s appropriate, because children develop at very different rates, and development isn’t always linear. The second is accessibility. AI should be designed so that all children and youth can use it, regardless of any physical or mental differences or disabilities, and that’s not always the case. Because AI is trained on large amounts of data, the data can have bias in it and is not always representative; there’s a very real risk that the AI won’t work equally well for all children, and that’s something you should think about too. The third category is sensors: things like cameras, microphones, and other sensors. A lot of these robots and smart dolls will use facial recognition to recognize a child, or voice recognition to understand that the brother is in this corner while his sister is over in that corner. As a parent you should understand whether you can turn some of these sensors off, because you might not want a smart doll sitting on a shelf listening to your family dinner conversation; similarly, you might not want a smart speaker recording your dinner conversations and potentially analyzing them. The fourth category is networks. A lot of these toys are connected to the internet, and a lot of them allow children to connect to other children and users. That’s obviously a risk, as I’m sure a lot of parents know, so you should find out whether there are ways to play offline, where children aren’t connecting to real people.
As we’ve heard from the other panelists, AI can often have a black-box effect, where it’s very hard to understand how the AI is actually working, even for people who are somewhat technically savvy. But you should try to understand how the technology is using AI. Is it using something like facial recognition, which can be very powerful but also quite problematic? We know, for example, that facial recognition is very bad at identifying people with darker skin tones, people of color, women, and girls wearing head coverings or headscarves, so you want to understand how that technology works. And lastly there is data use: as I’m sure a lot of parents on the call know, you want to understand how the technology is using your child’s data and where that data is being stored. The My Friend Cayla doll I talked about earlier actually sent data not only to the toy company but also to a third party, and the toy company did not make this very clear to parents. Because of the terms of use and terms of service that parents agreed to by letting their children use the toy, that third party could do essentially whatever they wanted with the data. So you should certainly make sure your child’s data is secure. Another initiative we ran at the World Economic Forum last year was the Smart Toy Awards; you can learn more at smarttoyawards.org. We gave awards to innovative, responsible, and ethically designed smart toys, and you can see the eight toys here. A few of them are robots, a lot of them are education technology, and some, like the intelino Smart Train, don’t converse with kids at all; that one actually teaches computer coding, which is really exciting. We vetted these toys and think they are ethically and responsibly designed, and we hope toy companies will follow suit. Lastly, my colleague Kay Firth-Butterfield and I wrote an article asking whether we want toys to be our children’s best friends. I think that’s a fundamental question. There is a very important role for analog toys in children’s lives, and some families might decide that smart toys are not right for their children, or not right at this stage of their lives. Ultimately, we want to make sure data collected on children is not used to discriminate against them in the future, as they grow older, apply to college, and even as adults. So in conclusion, what can you do as parents and guardians? The vision for the future that I like is an AI-augmented childhood. AI should not replace analog toys; AI is appropriate for some children and not for others, depending on your family; and we certainly don’t want AI to replace teachers in the classroom. It’s about AI augmenting childhood and other aspects of your life and your family’s life. As parents, you can educate yourselves, learn about products and consider them critically before buying, and try to demystify the black box. You can use the six categories in our guide for parents and guardians. And you can teach your children why their privacy matters, why their data matters, and how they can use AI responsibly. Really importantly, teach children that AI has limits and can be wrong, and it’s often wrong. AI, at the end of the day, is just a computer program; it’s neither good nor bad in itself. It’s just based on the data that you feed into it. These, we think, are very important lessons.
And I look forward to discussing this with my fellow panelists.
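[Editor’s note] Seth’s point that an AI is only as good as the data fed into it can be made concrete with a small sketch. The code below is purely illustrative; it is not from the webinar or the World Economic Forum toolkit, and every function name and number in it is invented. It trains a simple classifier on data that over-represents one group, then scores the model on each group separately; under these assumptions it shows the same mechanism behind the facial-recognition disparities Seth describes.

```python
# Illustrative sketch: a model trained on skewed data performs unequally
# across groups. All names and numbers here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Simulate one demographic group: 5 features plus a balanced label."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    # The label depends on the features relative to the group's own baseline.
    y = (X.sum(axis=1) + rng.normal(scale=1.0, size=n) > 5 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is under-represented and
# drawn from a slightly different distribution.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate each group separately: accuracy is typically much lower for
# group B, the group the model barely saw during training.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=1.5)
print("group A accuracy:", model.score(Xa_test, ya_test))
print("group B accuracy:", model.score(Xb_test, yb_test))
```

Running this typically prints a noticeably lower accuracy for group B, even though nothing in the code singles that group out; the skew in the training data does it on its own.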

[Dr. Mitch Prinstein- Moderator] Thank you so much, Seth. It is absolutely incredible to hear all these talks together, and the ways they connect is fascinating, if not slightly horrifying at the same time. I wanted to start with a question for Hod. Talking about artificial intelligence and robotics, it’s amazing to see what’s possible now and what’s coming. But has anyone started to study, or do you know of studies underway, looking at some of the psychological concomitants, the possible impacts of this on child development?

[Hod Lipson, PhD] That’s a great question, and the answer is yes and no. There are definitely studies being launched, but the technology is moving so fast, in my opinion, that it’s hard for the current academic cycle to keep up. As I said, just a few years ago AI was in a place where a lot of the things we’ve seen today couldn’t have been made. Now they can, and we’re all catching up. So yes, some studies are being launched, and the panel here today is among the people looking at many of these questions in very deep ways. But in general I feel there is almost a dichotomy in how academia, at least, is looking at this. You have engineers who develop these technologies and look at various metrics of performance. You have psychologists and others in the humanities looking at the effects, but they are not necessarily well versed in the technology; they frequently underestimate or overestimate its power, or don’t understand its actual weak points and strengths. So I sometimes feel we are talking past each other on the nuances of how this needs to be looked at. The short answer is that a little bit of work has been done, but there’s a lot more to do to keep up with the rate of progress. And more people need to understand how the technology works in order to really engage in this debate.

[Dr. Mitch Prinstein- Moderator] Thank you. You know, I wonder if I’m the only one who, after hearing these talks, is considering building an underground bunker to move into with my kids for the next 20 or 30 years. It sounds pretty scary to think that, not only are our kids, as we saw in Rachel’s presentation, trusting and humanizing these interactions, but from what we’ve heard about AI, these systems can be designed to fool anyone, to create an anthropomorphized experience that would fool even mature adults. Which makes me think about the conversations around ethics. Justine, I’d love to hear a little about how you made the decisions you made to really ensure that the artificiality was apparent. Seth, I think this also links to some of the things you’re talking about around ethics. And Richard, to some of the ways where, if ethical principles are not being followed, there might be legislative options to try to ensure that protections are in place. Justine, do you want to start us off by talking a little about some of those ethical considerations?

[Justine Cassell, PhD] Sure. First of all, and I wrote this in the comments, I think it’s really worthwhile to look at the work by Batya Friedman and her colleagues at the University of Washington on value sensitive design. That’s a framework that takes a very honest, very solid, and very future-oriented perspective on how we design technologies today and how we need to design them tomorrow. And I have to say that I’m guided by my values, which are to help children live healthy lives, to develop to be the healthiest they can be in every way, and to fulfill all of their dreams. This is kind of obvious. I’m lucky, however, and I have to say this, that I don’t have to make money from my technologies, and that allows me to live according to my ethics. So my virtual peers always say things like, “Oh, I’m sorry, I can’t do that, but I’ll tell my engineers that they should think of putting that in next.” They always reveal the fact that they are flawed, that they are technologies and not real. All the work that I’ve done with children shows that they actually think adults are suckers for believing these things are real; when we ask children whether they think they’re real, that’s their opinion of what’s going on. But nevertheless, we really pay attention, on the one hand, to making them flawed, making them not like real children, and on the other hand, to making them visible to children, making their guts visible to children. And I want to go over this again and again, because one of the things that worries me about where we are today is that we seem to have a schism between parents who embrace technology wholeheartedly and parents who prevent their children from using the newest technologies. I think both of those are equally dangerous. There is a third path: choosing technologies that children can take apart, and finding ways to work with one’s children to take them apart. I gave the example of breaking Alexa; you might all find this fun, but there are a lot of videos online of kids explaining how to hack Alexa. I personally think that is wonderful, because what they’re learning, and what they’re sharing with other children, is that this is something they can control, something they can make work the way they want it to work. You can have positive or negative opinions about children hacking, but these are eight- and nine-year-olds discussing how Alexa works and how Siri works, and we know that technology-aware parents have children who use more technology and are more familiar with how it works. I’d like to see every sector of society believe that this is something we can work on with our children. In the same way, researchers on television showed that co-watching was the best way to make television positive for children’s development. Co-using technology is really important.

[Dr. Mitch Prinstein- Moderator] That makes perfect sense. Yes, thank you. Seth, other thoughts about ethics, as we kind of enter this brave new world?

[Seth Bergeson, MBA, MPA] Yeah, absolutely, Mitch. I think it’s really fascinating, and I actually think we on the panel probably agree on more than we disagree about; Justine’s point is very good too. Something else we’re doing at the World Economic Forum is talking to young people around the world about how they feel about technology. While we know technology itself is neither good nor bad, I think young people around the world do have a pretty nuanced view of it. They don’t necessarily use the same vocabulary and terms we would use in these more academic and professional circles, especially the younger ones, but I do think they see a lot of the limitations of technology in a way that we as adults don’t always see. There is certainly a role for companies to lead on this, and profit-driven motives do complicate that a little. But we really are in this brave new world, as we’ve heard others say, and it’s going to be very important for companies to think about what can go well and what can go wrong. We’ve seen a lot of really great impact that AI can have in children’s lives, in education technology, especially when a lot of kids around the world weren’t able to go to school for months or, in some cases, years. And we also see a lot of therapeutic effects for children with autism, as we’ve heard about too. So hopefully the technology can be a net positive; what we’re really trying to do is maximize the benefits while minimizing the risks. But the risks are very complex and multifaceted, and when we think about the metaverse, that’s really AI and emerging technology on steroids, so it’s going to get much more complicated, and I think we need really proactive and creative design. The last thing I’d say is that in this design process we need to engage children and youth, not just to test the technology, but to see how they can break it; what kinds of things can actually happen? Hopefully companies are doing that and will continue to do so.

[Dr. Mitch Prinstein- Moderator] That’s helpful. Richard, we live in a capitalist society that has a First Amendment. I know you’re talking about policy implications, and I won’t ask you the attorney’s question about what might be legally possible. But as a psychologist, let me ask you: rather than waiting for our legislators or our tech companies to make the decisions that might be safest, parents have a role here. A lot of parents report that they buy their kids these German spy dolls, or get their kids on social media, because they don’t want their kid to be the only one who doesn’t have it; they feel that pressure. What are you telling families who are struggling to decide whether they should let their kids engage with current or future technology?

[Richard Freed, PhD] I sure hope that we have leadership, with respect to schools and community organizations, that doesn’t make it such that your kid is the one kid who doesn’t have the such-and-such smartphone or device. I think we can take the model of private schools: most private schools are more likely to have kids not bring their smartphones to school, and therefore those kids are better able to learn and to engage with their teachers. So I’m hoping, yes, that you should never be the one kid in sixth grade, let’s say, who doesn’t have a phone; that whole middle school should make sure every kid is off their phone and actually learning and engaging with their teachers.

[Dr. Mitch Prinstein- Moderator] That makes sense. So it sounds like the school has a responsibility as well. Rachel, as a parent and as a psychologist, it’s so interesting to think about kids trusting devices in ways that are different from, and perhaps even more trusting than, their relationships with humans: their therapists, their parents. Why do you think kids are more likely to disclose?

[Rachel Severson, PhD] Well, that research was with adults, so I don’t think we know that about kids as of yet. But we do know from research by Cynthia Breazeal and colleagues that kids will learn novel information equally well from a robot as from a person. That suggests kids are trusting these technologies, but I think it’s in certain domains, and at least that’s all we can say for sure based on the research we have now. It’s an open question whether children will extend that trust into other domains, you know, into social and moral domains. I think the research with adults disclosing more to a virtual therapist may have something to do with eliminating the social judgment that people may feel, even from their therapist, when disclosing information about their personal lives; maybe they view these technologies as a more neutral partner in discussing these things. It’s a really interesting space. There are examples of an AI, a virtual entity, called Replika I think it is, that can be there as a person to talk with. Those are really interesting spaces, and they really raise the question of what is happening, or what isn’t happening, in human-human interaction that people need to seek those out. But on the flip side, are there ways these technologies could be developed to be, you know, that perfect partner in having those discussions? I’m not advocating for either of those views, but I think it’s a really interesting question for us as a society to consider.

[Dr. Mitch Prinstein- Moderator] Thank you, Rachel. You know, another question for us to consider as a society has to do with adults’ own relationship with these technologies. Many of us, now or shortly after we finish this panel, are going to interact with technology; we’re going to rely on it, and we’re going to be grateful that it tells us how much traffic there is on the way to our next meeting without our even asking it to make those computations for us. What is the conversation we as adults need to have to recognize that there is a new force in our lives through AI and technology? What is the relationship we want, and what are we modeling to our children in how they see us interact with this technology? Because that is an incredibly powerful signal that we’re all complicit in communicating to our children, whether we realize it or not. What does that conversation look like, Hod? What is the way we want to be engaging with artificial intelligence or robotics? I don’t know that any of us have consciously confronted that for ourselves yet.

[Hod Lipson, PhD] Right. One of the things that concerns me the most is that the technology is accelerating, and it’s very difficult to foresee what will be possible in five years. If I have to name the biggest concern for me, it’s AI reaching the point where it can hold an open-ended conversation with a human. Right now, chatbot conversations are fairly restricted to particular topics, dialogue trees, and things like that. But when you get open-ended conversation, and it will happen eventually, at that point it will be easier for a child to have a virtual friend online than a human one. We’re already troubled by kids having friends online, but at least there is a human on the other side. When a child can have an open-ended conversation with an AI rather than speak to another human, I think that will be a big transition in how humans interact with humans, and it could be around the corner. So the challenge I see is how we begin to address these future issues, rather than looking at what technology can do today and launching a program to address today’s technology. Technology is accelerating, and what it will be able to do in the next five years is really what we need to focus on.
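[Editor’s note] To make Hod’s contrast concrete, here is a minimal, purely illustrative sketch of the kind of scripted dialogue tree he mentions. It is not code from any panelist or product, and all prompts and option names are invented. A bot like this can only follow paths its designer wrote in advance, which is exactly the restriction that distinguishes it from the open-ended conversation Hod anticipates.

```python
# Illustrative dialogue tree: every turn is pre-scripted by the designer.
DIALOGUE_TREE = {
    "start": {
        "prompt": "Hi! Do you want to talk about homework or games?",
        "options": {"homework": "homework", "games": "games"},
    },
    "homework": {
        "prompt": "What subject is it? (math or reading)",
        "options": {"math": "end", "reading": "end"},
    },
    "games": {
        "prompt": "Single player or with friends?",
        "options": {"single": "end", "friends": "end"},
    },
    "end": {"prompt": "Thanks for chatting!", "options": {}},
}

def run(tree, answers):
    """Walk the tree with a fixed list of user answers; any answer outside
    the scripted options dead-ends, the opposite of open-ended talk."""
    node = "start"
    for answer in answers:
        print(tree[node]["prompt"])
        node = tree[node]["options"].get(answer)
        if node is None:
            print("Sorry, I don't understand.")  # off-script input fails
            return
    print(tree[node]["prompt"])

run(DIALOGUE_TREE, ["games", "friends"])  # follows a scripted path happily
run(DIALOGUE_TREE, ["weather"])           # anything unscripted dead-ends
```

The second call fails immediately on an unscripted topic; an open-ended system, by contrast, would have no such fixed map of allowed turns.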

[Dr. Mitch Prinstein- Moderator] Thank you. Anyone else with a final comment on that same question: where do we go from here, and what do we need to reconcile in our own minds as adults?

[Seth Bergeson, MBA, MPA] I can just build on Hod briefly. I think the understanding of reality is really interesting. When I was a kid 25 or 30 years ago, my G.I. Joe was clearly not human. But in this metaverse, with these robots, that line is going to blur quite a bit, and we as adults don’t necessarily know where to draw it. I think we can draw that line partially, but we also need to engage children and youth in that conversation and figure out how they want to draw it. Childhood is really fundamentally going to change, along with aspects of play and how we socialize. So I’ll agree with Hod that it’s very, very hard to predict, but I think we need to be having these multi-stakeholder discussions, thinking really creatively about it, and engaging children’s and youth’s voices as well.

[Hod Lipson, PhD] I’m encouraged, though, to hear from Rachel and Justine how well kids can tell the difference between machines and software on the one hand, and humans on the other. Hopefully they’ll figure it out. I just hope.

[Justine Cassell, PhD] I think that’s true, because we do have to look back at history and remember, well, we don’t remember it personally, but we know that priests said the printing press was going to destroy religion, because priests would no longer have the corporeal experience of writing out religious texts by hand. In the 1940s, families were really worried that radio was going to destroy the family, and there too it didn’t. Same with television. And this is not going to destroy us either, because we work with what’s around us, and I think we’ll continue to. We sometimes forget how capable children are; perhaps more than we realize.

[Dr. Mitch Prinstein- Moderator] Thank you for that. I’m going to turn it back over to Pam.

[Dr. Pam Hurst-Della Pietra] Well, thank you very much, Mitch, Hod, Richard, Rachel, Justine, and Seth, for a fascinating look at future technologies and advice on how to move forward with children in mind. Thank you to our Zoom audience as well, for taking time out of your busy schedules to attend this session. I hope the information wasn’t too disturbing, and that each of you leaves the webinar today with hope for tomorrow and a commitment to helping create a better world for children and families. Until then, live long and prosper.