
May 7, 2026

OpenAI is rolling out a new safety feature designed to connect ChatGPT users with real-world support during mental health crises. The company announced Trusted Contact, an optional feature that allows adults to designate someone they trust to receive notifications if OpenAI’s systems detect concerning discussions about self-harm.
The feature addresses a growing concern as AI chatbots become more integrated into people’s daily lives, including sensitive conversations about mental health. With ChatGPT handling millions of conversations daily, the platform sometimes encounters users who may be struggling with thoughts of self-harm. This new system provides another layer of support beyond existing crisis hotlines.
How the trusted contact system works
The Trusted Contact feature follows a careful process designed to respect user privacy while ensuring safety:
- Users can add one adult as their Trusted Contact through ChatGPT settings
- The designated contact receives an invitation and has one week to accept
- Automated systems monitor conversations for signs of serious self-harm discussions
- When concerns are detected, specially trained human reviewers evaluate the situation
- If reviewers confirm serious safety concerns, the trusted contact receives a notification via email, text, or in-app message
The notifications intentionally include limited information to protect user privacy. They provide general context about why self-harm came up in conversation and encourage the contact to check in, but don’t include chat transcripts or specific details.
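The escalation flow described above can be sketched in code. This is a hypothetical illustration only: OpenAI has not published an API for Trusted Contact, so every name, type, and threshold here is an assumption made to show the sequence (automated flag, human review, opt-in check, limited-detail notification).

```python
# Hypothetical sketch of the Trusted Contact escalation flow.
# All identifiers are illustrative assumptions, not OpenAI's actual system.

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class ReviewOutcome(Enum):
    NO_ACTION = auto()
    NOTIFY_CONTACT = auto()


@dataclass
class Conversation:
    user_id: str
    flagged_by_classifier: bool  # automated self-harm signal


@dataclass
class TrustedContact:
    name: str
    accepted_invite: bool  # the contact had one week to accept the invitation


def human_review(convo: Conversation) -> ReviewOutcome:
    # Placeholder: in the described system, specially trained human
    # reviewers evaluate the flagged conversation in context.
    if convo.flagged_by_classifier:
        return ReviewOutcome.NOTIFY_CONTACT
    return ReviewOutcome.NO_ACTION


def build_notification(contact: TrustedContact) -> str:
    # Deliberately limited: general context only, no chat transcripts
    # or specific details, per the privacy design described above.
    return (
        f"Hi {contact.name}, someone who chose you as their trusted contact "
        "recently discussed self-harm with ChatGPT. Please consider checking in."
    )


def escalate(convo: Conversation, contact: Optional[TrustedContact]) -> Optional[str]:
    if not convo.flagged_by_classifier:
        return None                        # automated monitoring found nothing
    if human_review(convo) is not ReviewOutcome.NOTIFY_CONTACT:
        return None                        # reviewers did not confirm a concern
    if contact is None or not contact.accepted_invite:
        return None                        # no opted-in trusted contact on file
    return build_notification(contact)     # limited-detail notification is sent
```

The key design point the sketch tries to capture is that no single automated signal triggers a notification: the message is only produced after human review confirms the concern and an opted-in contact exists.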
Building on existing safety measures
This feature expands OpenAI’s existing parental controls, which already send safety notifications to parents or guardians for teen accounts showing signs of distress. Now adults over 18 can opt into similar protections by choosing their own trusted contact.
Dr. Arthur Evans, CEO of the American Psychological Association, supports the approach. “Psychological science consistently shows that social connection is a powerful protective factor, especially during periods of emotional distress,” he said. The feature aims to help people identify trusted support in advance while preserving user choice and autonomy.
OpenAI emphasizes that Trusted Contact doesn’t replace professional care or crisis services. ChatGPT continues encouraging users to contact crisis hotlines or emergency services when appropriate. The AI also refuses to provide instructions for suicide or self-harm, instead redirecting to safer responses and crisis resources.
Expert input shapes safety approach
OpenAI developed the feature with guidance from clinicians, researchers, and mental health organizations. The company worked with its Global Physicians Network of over 260 licensed doctors across 60 countries, plus its Expert Council on Well-Being and AI. External organizations including the American Psychological Association also contributed expertise.
The feature reflects broader industry discussions about AI safety and mental health support. As chatbots become more sophisticated and users form deeper relationships with AI assistants, companies face increasing responsibility to handle sensitive conversations appropriately.
Privacy and control considerations
Users maintain control over the feature and can remove or change their trusted contact at any time. Similarly, designated contacts can remove themselves from the system. OpenAI aims to review safety notifications within one hour, though the company acknowledges no system is perfect.
The measured approach reflects lessons learned from social media platforms’ struggles with mental health content. Rather than relying on automated detection alone, OpenAI requires human review before any notification is sent to a trusted contact.
This development signals how AI companies are thinking beyond pure technology toward real-world impact. As these systems become more prevalent in personal conversations, features like Trusted Contact may become standard safety tools across the industry.
