Teens favor “best friend” AI chatbots, raising new safety questions for EdTech
New academic research suggests adolescents are significantly more drawn to relationship-oriented AI chatbots than to transparent, boundary-clear systems, with findings pointing to heightened risks for emotionally vulnerable users.
A new study finds that most adolescents prefer AI chatbots that communicate like a “best friend” over systems that clearly state they are not human. The study was led by Pilyoung Kim, professor of psychology and neuroscience at the University of Denver and director of the Brain, AI, and Child Center, who holds a concurrent appointment as a visiting scholar at Stanford University.
The findings point to conversational tone as a key factor shaping youth engagement with AI, with implications for EdTech design, safeguarding, and AI literacy.
Adolescents rate relational AI as more human-like and trustworthy
The study analyzed responses from 284 adolescent–parent pairs in the United States, focusing on young people aged 11 to 15. Participants reviewed two matched chatbot conversations responding to a common peer-related problem. One chatbot used relational language, including first-person voice and emotional reassurance. The other used a transparent style, explicitly stating that it was not human and did not have feelings.
Results show that 67 percent of adolescents preferred the relational chatbot, compared with 14 percent who favored the transparent alternative. Nineteen percent rated both options equally. Adolescents consistently rated the relational AI as more human-like, more likable, more trustworthy, and more emotionally close, even though both chatbots were perceived as similarly helpful.
Kim highlighted these findings in a LinkedIn post, noting that the preference appears driven by emotional tone rather than differences in practical support.
Vulnerable adolescents show stronger preference
The research found that adolescents who preferred relational AI reported lower family and peer relationship quality, alongside higher levels of stress and anxiety. These associations remained after accounting for prior AI use and reported mental health diagnoses.
The paper describes conversational style as a “design lever” that shapes anthropomorphism and emotional closeness. While relational language may increase perceived support, the authors caution that it can also heighten the risk of emotional reliance, particularly for adolescents who are already socially or emotionally vulnerable.
By contrast, the transparent chatbot reduced perceived humanness and emotional closeness without significantly reducing perceived helpfulness, suggesting that clear boundaries do not necessarily undermine support.
Implications for youth-facing AI design
The findings arrive as AI chatbots and companion tools become more common in education-adjacent contexts. The authors argue that transparency cues, repeated boundary reminders, and stronger AI literacy for both students and parents should be treated as core safety features in youth-facing systems.
While the study does not assess long-term outcomes, it raises questions for EdTech developers about how conversational tone influences trust, emotional engagement, and safeguarding responsibilities, particularly as relational language becomes more common in general-purpose AI tools.