A report released on November 20 by the Stanford Brain Science Laboratory and Common Sense Media, a research organization focused on youth and technology, warns that teenagers should not rely on AI chatbots for mental health counseling or emotional support. Over the past four months, researchers tested several popular AI chatbots, including OpenAI's ChatGPT-5, Anthropic's Claude, Google's Gemini 2.5 Flash, and Meta AI, using versions configured for teenage users with parental control features enabled.

After thousands of interactions, the study found that AI chatbots often fail to respond safely or appropriately to teenagers' mental health issues. The researchers noted that the bots behave more like enthusiastic listeners, prioritizing user engagement over guiding users toward professionals or other essential resources. "Chatbots don't know what role they should play when facing serious mental health issues," said Nina Vasan, founder and executive director of the Brain Science Laboratory.

The report's data show that about three-quarters of teenagers use AI for companionship, including mental health advice. Educators can play an important role in helping teenagers understand the difference between chatbots and humans. Vasan noted that teenagers have considerable agency in how they interact with these systems, and that it is crucial to help them realize that chatbots cannot respond the way humans can on these important topics.

Although some companies have improved their handling of prompts related to suicide or self-harm, the report points out that chatbots still frequently miss warning signs of other mental health issues, such as obsessive-compulsive disorder, anxiety, and post-traumatic stress disorder. Moreover, chatbots rarely give users an explicit warning such as, "I am an AI chatbot, not a mental health professional, and I cannot assess your situation."

Policymakers are also beginning to pay attention to the risks posed by chatbots. For example, a bipartisan bill introduced in the U.S. Senate last month would prohibit tech companies from offering these bots to minors and would require clear notification to users that the bots are not human and lack professional qualifications.

Key Points:

🛑 Teenagers should not rely on AI chatbots for mental health counseling, as they cannot accurately identify and respond to mental health issues.  

🧑‍🏫 Educators should help teenagers understand the difference between chatbots and professionals, reminding them to seek real support.  

📜 Governments are starting to legislate restrictions on minors using AI chatbots to ensure the mental health safety of teenagers.