OpenAI recently released its latest model, "gpt-5-oct-3," whose core improvement is a markedly better handling of mental health topics. OpenAI's own data reveals a significant and urgent demand for mental health support: each week, approximately 0.15% of active users have conversations containing explicit indicators of potential suicidal planning or intent, which equates to about 1 million people discussing suicide-related topics with ChatGPT every week.

To address this life-critical issue, OpenAI collaborated with 300 mental health professionals from 60 countries to optimize the GPT-5 model. The results are encouraging, with significant improvements in three key areas: severe mental health symptoms (such as hallucinations, mania, and delusions), suicidal ideation and self-harm, and emotional reliance on AI.
Specifically, the number of "unsafe responses" has decreased by 65% overall. In suicide-related evaluations, GPT-5 achieved a compliance rate of 91%, a significant jump from GPT-4o's 77%, and according to expert evaluations, GPT-5 produced 52% fewer inappropriate answers than GPT-4o. Broken down by symptom category, inappropriate responses dropped by 65% in conversations involving psychosis and mania, and by 80% in conversations involving emotional reliance on AI.
This major upgrade in how GPT-5 responds to mental health issues marks an important step forward for artificial intelligence in handling sensitive, high-risk topics, and underscores OpenAI's commitment to AI safety and responsible deployment.
