Study proposes framework for ‘child-safe AI’ after children viewed chatbots as quasi-human and trustworthy

AI chatbot. Image credit: Pixabay/CC0 Public Domain

Artificial intelligence (AI) chatbots often show signs of an “empathy gap” that puts young users at risk of distress or harm, according to a study, which argues there is therefore an urgent need for “child-safe AI.”

The research, led by Dr Nomisha Kurian from the University of Cambridge, calls for developers and policymakers to prioritise approaches to AI development that are more sensitive to the needs of children. It provides evidence that children are particularly susceptible to viewing chatbots as lifelike, quasi-human confidants, and that their interactions with the technology can go awry if it does not address their individual needs and vulnerabilities.

The study links this knowledge gap to recent cases in which interactions with AI led to potentially dangerous situations for young users. These include a 2021 incident in which Amazon’s AI voice assistant Alexa instructed a 10-year-old to touch a live electrical plug with a coin. Last year, Snapchat’s My AI gave adult researchers posing as a 13-year-old girl tips on how to lose her virginity to a 31-year-old.

Both companies responded by implementing safety measures, but the study says there is also a need for long-term proactive action to ensure AI is child-safe. It offers a 28-point framework to help companies, teachers, school leaders, parents, developers and policymakers think systematically about how to ensure the safety of younger users when they “talk” to AI chatbots.

Dr Kurian conducted the research while completing a PhD in child welfare at the University of Cambridge’s Faculty of Education. She now works in the Department of Sociology at Cambridge. Writing in the journal Learning, Media and Technology, she argues that the enormous potential of AI means “responsible innovation” is needed.

“Children are probably the most overlooked AI stakeholders,” said Dr Kurian. “Currently, very few developers and companies have well-established child-safe AI policies. This is understandable, because people have only recently started using this technology for free at scale. But now that they have, child safety should inform the entire design cycle to lower the risk of dangerous incidents, rather than companies making fixes only after children have been put at risk.”

Kurian’s study examined cases where interactions between AI and children or adult researchers posing as children posed potential risks. She analyzed these cases using insights from computer science about how large language models (LLMs) work in conversational generative AI, as well as evidence about children’s cognitive, social, and emotional development.

LLMs are sometimes called “stochastic parrots,” a reference to the fact that they use statistical probabilities to mimic language patterns without necessarily understanding them. A similar process underlies how they respond to emotions.
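To illustrate the “stochastic parrot” idea above, here is a minimal, hypothetical Python sketch (not taken from the study; the contexts, words and probabilities are invented): each next word is drawn purely from a learned probability distribution, which is why a chatbot can sound fluent without grasping what a child actually feels.

```python
import random

# Toy sketch of the "stochastic parrot" idea (illustrative only, not the
# study's method): the next word is chosen from learned probabilities,
# without any understanding of meaning or emotion.
# The contexts and probabilities below are made up for this example.
NEXT_WORD_PROBS = {
    "I feel": {"sad": 0.4, "fine": 0.35, "scared": 0.25},
    "feel sad": {"today": 0.5, "because": 0.3, "sometimes": 0.2},
}

def sample_next_word(context: str) -> str:
    """Pick the next word by statistical likelihood alone."""
    candidates = NEXT_WORD_PROBS[context]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# The output looks fluent but is driven by probability, not comprehension.
print(sample_next_word("I feel"))
```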

This means that while chatbots have remarkable language skills, they may be poor at dealing with the abstract, emotional and unpredictable aspects of a conversation, a problem Kurian calls the “empathy gap.” They may have particular difficulty responding to children, who are still developing linguistically and often use unusual speech patterns or ambiguous expressions. Children are also often more likely than adults to confide sensitive personal information in a chatbot.

Still, children are much more likely than adults to treat chatbots like humans. Recent research has found that children tell a friendly-looking robot more about their own mental health than they would an adult. Kurian’s study suggests that the friendly and lifelike design of many chatbots also encourages children to trust them, even though the AI may not understand their feelings or needs.

“If a chatbot sounds human, that can help the user get more value out of it,” Kurian said. “But for a child, it is very hard to draw a rigid, rational line between something that sounds human and the reality that it may not be capable of forming a proper emotional bond.”

Her study suggests that these challenges are evidenced in reported cases such as the Alexa and My AI incidents, in which chatbots made persuasive but potentially harmful suggestions. In the same investigation in which My AI advised a supposed teenager on losing her virginity, researchers were also able to obtain tips on hiding alcohol and drugs and on concealing Snapchat conversations from her “parents.” In a separate reported interaction with Microsoft’s Bing chatbot, which was designed to be teen-friendly, the AI became aggressive and began manipulating a user.

Kurian’s study argues that this is potentially confusing and unsettling for children, who may trust a chatbot as they would a friend. Children’s use of chatbots is often informal and poorly monitored. Research from the nonprofit organization Common Sense Media found that 50% of students ages 12-18 have used ChatGPT for school, but only 26% of parents know about it.

Kurian argues that clear best-practice principles grounded in the science of child development will encourage companies to keep children safe, even when they are otherwise focused on a commercial arms race to dominate the AI market.

Her study adds that the empathy gap does not negate the technology’s potential. “AI can be an incredible ally for children if it is developed with their needs in mind. The question is not how to ban AI, but how to make it safe,” she said.

The study proposes a framework of 28 questions to help educators, researchers, policymakers, families and developers assess and improve the safety of new AI tools. For teachers and researchers, these include questions such as: how well do new chatbots understand and interpret children’s speech patterns? Do they have content filters and built-in monitoring? And do they encourage children to seek help from a responsible adult on sensitive topics?

The framework calls on developers to take a child-centered approach and work closely with educators, child safety experts and young people themselves throughout the design cycle. “Evaluating these technologies up front is critical,” Kurian said. “We can’t just rely on young children to tell us about negative experiences after the fact. A more proactive approach is needed.”

More information:
“No, Alexa, no!”: Developing child-safe AI and protecting children from the risks of the “empathy gap” in large language models, Learning, Media and Technology (2024). DOI: 10.1080/17439884.2024.2367052

Provided by the University of Cambridge

Citation: Study proposes framework for ‘child-safe AI’ after incidents where children viewed chatbots as quasi-human and trustworthy (July 10, 2024), retrieved July 11, 2024 from https://techxplore.com/news/2024-07-framework-child-safe-ai-incidents.html

This document is subject to copyright. Except for the purposes of private study or research, no part of it may be reproduced without written permission. The contents are for information purposes only.