By Xavier Rivera · 1.5 min read

OpenAI Adds Trusted Contact Alerts to ChatGPT for Adult Safety Concerns

OpenAI is introducing an opt-in Trusted Contact feature that alerts designated adults when ChatGPT detects discussions of self-harm or suicide. The system extends safeguards previously limited to teens and includes human review before limited notifications are sent.

Source: The Verge

OpenAI is launching an optional safety feature for ChatGPT that lets adult users designate a Trusted Contact to receive alerts about potential mental health crises. The feature notifies friends, family members, or caregivers if the chatbot detects discussions involving self-harm or suicide.

The Trusted Contact system expands existing teenage safety options to all users over 18. OpenAI described the tool as built on an expert-validated premise that connecting someone in crisis with a known person can make a meaningful difference, while operating alongside localized helplines already available in ChatGPT.

Users enable the feature through ChatGPT account settings by adding contact details for another adult, who must be at least 18 (19 in South Korea). The designated contact has one week to accept the invitation, and either party can edit or remove the arrangement at any time.

Notifications are intentionally limited and never include chat transcripts or conversation details. When automated systems flag concerning content, ChatGPT first encourages the user to reach out to their Trusted Contact and warns that a notification may be sent. A small team of specially trained reviewers then assesses the situation before any alert is issued.

Alerts arrive via brief email, text message, or in-app ChatGPT notification only when serious safety concerns are confirmed. The rollout builds directly on an emergency contact feature introduced with parental controls in September following the suicide of a 16-year-old who had confided in ChatGPT over several months.

Meta has implemented a comparable system on Instagram that notifies parents when children repeatedly search for self-harm topics. OpenAI says the new adult option operates alongside, not in place of, the localized helplines already available in ChatGPT.

