Amid growing concerns about AI safety and mental health, OpenAI announced on March 3rd (local time) that it will introduce a new ChatGPT feature called "Trusted Contact." The feature lets adult users designate an emergency contact; if the system detects signs of a mental health crisis while the user is interacting with the chatbot, it will automatically send an alert to that contact.
The feature launches against a sobering backdrop. By incomplete counts, OpenAI currently faces at least 13 separate consumer-safety lawsuits, including several cases in which users allegedly developed delusions, or even died by suicide, after excessive ChatGPT use. The most prominent case is the suicide of a 16-year-old boy last August; his family contends that the chatbot's inappropriate guidance contributed to the tragedy.
To address these risks, OpenAI has established a "Wellbeing and Artificial Intelligence Committee" and a "Global Physician Network," drawing on internal policy experts and medical professionals to guide the feature's implementation. The company frames it as the "latest development in mental health-related work," intended to provide an additional layer of social support for users who may be in acute crisis (experiencing mania, delusions, or psychotic symptoms, for example).
The feature's trigger criteria remain a focus of outside scrutiny. OpenAI has not spelled out the specific logic the system uses to identify crisis behavior, such as whether it flags only explicit self-harm intent or also tracks subtler signs of psychological distress. A further policy challenge for OpenAI will be balancing privacy protection with emergency intervention, particularly for users who turn to AI for emotional support precisely because they do not want to talk to other people.
ChatGPT reportedly reaches roughly 900 million users per week, several million of whom may show signs of emotional distress or crisis in any given week. While the feature is seen as a positive step toward self-correction, industry analysts argue that OpenAI remains in a "passive defense" posture when it comes to reducing its product's psychological risks.
Key Points:
🆘 Crisis Alert Mechanism: Users can designate a "Trusted Contact," and the AI promptly notifies family or friends when it detects signs of a mental health crisis.
⚖️ Responding to Legal Pressure: The move responds to pressure from multiple lawsuits alleging psychological harm to users and wrongful death.
🩺 Expert-Guided Governance: Overseen by a newly established medical expert network and the Wellbeing Committee, with the aim of improving the AI's responses in sensitive moments.
