ChatGPT can now warn a friend or family member if it believes a user may be in danger

OpenAI has added a new trusted contact feature to ChatGPT. Credit: arda savasciogullari, Shutterstock

OpenAI has introduced a new ChatGPT feature that allows users to choose a trusted person who could be alerted if the AI believes they may be facing a serious safety risk. The system lets adult users select a friend, relative or caregiver who may receive a notification if ChatGPT detects conversations suggesting the person could be in crisis or at risk of harming themselves.

The new option is already attracting attention because it changes the role ChatGPT can play during deeply personal conversations. While many people still mainly use AI for work, studying or everyday questions, OpenAI says increasing numbers of users are also turning to ChatGPT during difficult emotional moments or periods of personal stress.

The company says the new feature is designed to provide an additional layer of support rather than replace professional mental health care or emergency services.

How the new ChatGPT trusted contact system works

The feature is called Trusted Contact and can be activated through ChatGPT settings by adult users.

Once enabled, users can choose someone they trust who could potentially be contacted if ChatGPT identifies signs of serious danger during conversations.

According to OpenAI, the system relies on automated safety monitoring already used to detect discussions linked to self-harm or situations where a person’s safety may be at risk.

If the AI detects language suggesting a severe concern, the conversation may then be reviewed by trained members of OpenAI’s safety team.

If the situation is considered serious enough, the trusted contact could receive a notification encouraging them to check on the user and offer support.

OpenAI says the notification may arrive through email, text message or app notification if the trusted contact also uses ChatGPT.

The company says the idea is to help reconnect people with someone they already know and trust during moments when they may feel isolated or overwhelmed.

The feature is optional and will not activate automatically. Users remain responsible for selecting their trusted contact and the chosen person must first agree to take on the role.

After being selected, the contact receives an invitation explaining how the system works and has one week to accept it. If they refuse, the user can choose another person instead.

Why OpenAI says more people are having personal conversations with ChatGPT

OpenAI says the update reflects how people are increasingly using AI assistants in more emotional and personal ways.

In a statement published on its blog, the company explained that many users turn to ChatGPT not only for information or productivity tasks, but also to think through personal issues, stressful situations or emotional difficulties.

That shift has created growing debate around how AI should respond when users appear vulnerable.

Some people see chatbots as useful companions during lonely or difficult moments. Others worry that people may begin relying too heavily on artificial intelligence for emotional support instead of seeking help from real people.

OpenAI says ChatGPT is designed to respond empathetically while still encouraging users to seek professional support and human connection where necessary.

The company insists the new trusted contact system is meant to strengthen those real-world connections rather than replace them.

ChatGPT will also continue directing users towards emergency services or crisis helplines when appropriate.

The new feature builds on safety systems already used for younger users, including parental safety notifications. But applying similar ideas to adult conversations raises much bigger questions around privacy, trust and how much involvement AI companies should have when users appear emotionally distressed.

The new feature is likely to divide opinion

Some people will probably welcome the idea of a trusted friend or relative being alerted during a serious crisis.

For users who live alone or struggle with isolation, knowing someone could potentially be notified may feel reassuring rather than intrusive.

Others, however, are likely to feel uncomfortable about the idea of personal conversations with an AI system being analysed closely enough to trigger human review and outside notifications.

Even though OpenAI says trained staff only review conversations when severe safety concerns are detected, the feature is already likely to raise wider questions about privacy and how AI moderation systems operate behind the scenes.

There is also the difficult question of interpretation. Human emotions are complex and conversations are not always straightforward. People often express frustration, fear or dark humour online without necessarily being in immediate danger.

That means the accuracy of AI-based safety systems will probably remain under close scrutiny as features like this become more common.

OpenAI has not presented the system as a replacement for therapists, doctors or emergency support services.

Instead, the company describes it as an additional safeguard intended to help people reconnect with someone they already trust during difficult moments.

Still, the launch highlights how rapidly AI assistants are moving beyond simple digital tools.

For many users, conversations with chatbots are becoming far more personal than companies originally imagined only a few years ago. And with new features like Trusted Contact, the line between artificial intelligence and real-world support systems is becoming increasingly blurred.

Written by

Farah Mokrani

Farah is a journalist and content writer with over a decade of experience in both digital and print media. Originally from Tunisia and now based in Spain, she has covered current affairs, investigative reports, and long-form features for a range of international publications. At Euro Weekly News, Farah brings a global perspective to her reporting, contributing news and analysis informed by her editorial background and passion for clear, accurate storytelling.
