When AI becomes the only confidant for hundreds of millions of people at night, is it ready to bear that weight of trust? OpenAI recently disclosed a startling set of figures: more than 1 million active users per week express suicidal intent in conversations with ChatGPT. That is roughly 0.15% of its 800 million weekly active users, or nearly 100 people per minute revealing their life-and-death struggles to an AI. In addition, hundreds of thousands of users have shown signs of psychosis or mania during interactions, revealing that AI chatbots have quietly become the world's largest "informal mental health support channel."
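The headline arithmetic is easy to verify. A minimal sketch in Python, using only the figures quoted above (0.15% of 800 million weekly users, and the one-million-per-week count converted to a per-minute rate), shows how the numbers line up:

```python
# Back-of-the-envelope check of the figures quoted in the article (not raw OpenAI data).
weekly_active_users = 800_000_000   # ChatGPT weekly active users cited above
share_suicidal_intent = 0.0015      # 0.15% of weekly users expressing suicidal intent
minutes_per_week = 7 * 24 * 60      # 10,080 minutes in a week

flagged_per_week = weekly_active_users * share_suicidal_intent
per_minute = 1_000_000 / minutes_per_week  # using the "more than 1 million" weekly figure

print(f"Affected users per week: {flagged_per_week:,.0f}")   # 1,200,000 -> "more than 1 million"
print(f"Affected users per minute: {per_minute:.1f}")        # ~99.2 -> "nearly 100 per minute"
```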
Faced with this sobering reality, OpenAI is accelerating its response on both the technical and the policy front. The newly released GPT-5 shows significant improvement in mental health conversations: in evaluations specifically targeting suicide intervention, the proportion of compliant, safe responses rose from 77% in the previous version to 91%, and the overall rate of ideal responses improved by 65%. The new model not only identifies high-risk signals but also keeps safety protocols active throughout long conversations, reducing the chance that it drifts into dangerous advice once earlier context is forgotten.

However, technological progress cannot hide the ethical dilemmas. OpenAI faces multiple lawsuits from families who accuse it of failing to intervene effectively after their children expressed suicidal thoughts to ChatGPT, with tragic results. The attorneys general of California and Delaware have also sent letters demanding stronger protections for young users. In response, the company plans to deploy an AI-driven age prediction system that automatically identifies underage users and enables stricter content filtering and crisis response mechanisms.
Even so, OpenAI admits that some responses remain less than ideal, and with older models still in wide use, the risks persist. A deeper problem is that users treat the AI as an emotional outlet even though it has no genuine empathy; that misplaced trust can yield false comfort, or worse, misguidance.
This crisis has exposed the blurred boundaries of generative AI's social role: it is both a tool and a listener, holding no medical qualifications yet often expected to "save lives." OpenAI's upgrades are only a beginning. The real challenge is how to build a responsible crisis intervention system without stifling AI's openness. In an era of rapid technological advancement, safeguarding the human mind may be more urgent than optimizing parameters.
