ChatGPT adds Trusted Contact feature for serious self-harm concerns

OpenAI says the optional setting is designed to connect adults with real-world support after automated detection and trained human review.

ChatGPT Trusted Contact setup screens showing how adults can add a trusted person for serious self-harm safety alerts

OpenAI has started rolling out ChatGPT Trusted Contact, an optional safety feature that lets adults nominate a friend, family member, or caregiver who may be notified if automated systems and trained reviewers detect a serious self-harm concern.

The update gives ChatGPT users over 18 a way to set up a support contact in advance, while keeping crisis services, localized helplines, and emergency guidance as separate safeguards. For education and EdTech, where AI tools are increasingly used by students, teachers, and families, the rollout offers another signal of how major platforms are building escalation routes around mental health, safeguarding, and high-risk conversations.

Chris Lehane, Chief Global Affairs Officer at OpenAI, posted about the feature on LinkedIn, describing Trusted Contact as a way for AI systems to encourage connection to “trusted people, care, and offline support systems during sensitive moments.”

Adults can nominate one trusted person

Trusted Contact is available for adults who choose to set it up in ChatGPT. Users can add one adult as their Trusted Contact from settings. The nominated person receives an invitation explaining their role and must accept it within one week before the feature becomes active.

If OpenAI’s automated systems detect that a user may be discussing self-harm in a way that indicates a serious safety concern, ChatGPT lets the user know that their Trusted Contact may be notified. It also encourages the user to contact that person directly and provides suggested conversation starters.

OpenAI says a small team of specially trained people then reviews the situation. If reviewers decide the conversation may indicate a serious safety concern, ChatGPT sends the Trusted Contact a brief notification by email, text message, or, if they have a ChatGPT account, an in-app alert.

The notification is limited. OpenAI says it shares the general reason that self-harm came up in a potentially concerning way and encourages the Trusted Contact to check in. It does not include chat details or transcripts.

Users can remove or edit their Trusted Contact in settings, and the Trusted Contact can remove themselves through OpenAI’s help center.

OpenAI positions feature as an added safeguard

OpenAI says Trusted Contact does not replace crisis services, emergency care, or professional mental health support. ChatGPT will still encourage users to contact crisis hotlines or emergency services when appropriate.

The company says every Trusted Contact notification undergoes trained human review before it is sent and that it aims to review these safety notifications in under one hour. OpenAI also states that notifications may not always reflect exactly what someone is experiencing.

Dr. Arthur Evans, Chief Executive Officer of the American Psychological Association, says: “Psychological science consistently shows that social connection is a powerful protective factor, especially during periods of emotional distress. Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most.”

OpenAI says the feature was developed with guidance from clinicians, researchers, and mental health organizations. The work was informed by its Global Physicians Network, which includes more than 260 licensed physicians across 60 countries, and its Expert Council on Well-Being and AI.

Dr. Munmun De Choudhury, J. Z. Liang Professor of Interactive Computing at Georgia Tech and member of the Expert Council on Well-Being and AI, says: “One of AI's biggest promises is how it can foster authentic human-to-human connection and psychological safety. I am encouraged by ChatGPT's Trusted Contact feature, which offers a step forward to human empowerment, especially during moments of vulnerability.”

AI safety moves further into user settings

Trusted Contact builds on the safety notifications in OpenAI’s parental controls, which allow parents or guardians to receive alerts when signs of acute distress are detected on a linked teen account. The new feature extends safety alert options to users over 18 who choose to add a trusted adult.

OpenAI says it has also worked with more than 170 mental health experts to improve ChatGPT’s ability to detect and respond to signs of distress, de-escalate sensitive conversations, refuse harmful requests, and guide users toward real-world support.

The rollout puts more safety controls directly into ChatGPT settings rather than leaving them only at system level. For schools, universities, and EdTech providers watching how AI platforms handle safeguarding, the next test is whether optional contact-based alerts become a broader design pattern across student-facing AI tools.
