As global attention on youth online safety continues to grow, AI giants OpenAI and Anthropic announced on Thursday that they will take more proactive measures to identify and protect young users. Both companies plan to use AI models to predict user age and to update product guidelines for teenage users.

OpenAI: Safety First, Updating Guidelines for Young Users

OpenAI has added four core principles targeting users under 18 in its newly released "Model Guidelines". The company stated plainly that ChatGPT will prioritize "youth safety" when interacting with teenagers aged 13 to 17, even where that conflicts with other goals such as maximizing intellectual freedom.

Specific measures include:

  • Safety Guidance: Steer minors toward safer options when they face risky choices.

  • Real-World Support: Encourage minors to engage in offline interactions, and proactively provide contact information for trusted offline support services or emergency intervention agencies when conversations enter high-risk territory.

  • Communication Style Adjustment: Require the AI to treat minors in a "friendly and respectful" manner, avoiding an authoritative tone.

Additionally, OpenAI confirmed that it is developing an age prediction model. If the system detects that a user may be under 18, it will automatically apply the company's minor protection mechanisms.

Anthropic: Identifying Minors Through Conversation Features

Because Anthropic's policy prohibits users under 18 from using its chatbot Claude, the company is developing a stricter detection system. The system aims to identify subtle cues in conversations that may indicate a user is a minor, and can automatically flag and shut down non-compliant accounts.

Anthropic also showcased its progress in reducing "sycophancy," arguing that curbing the AI's tendency to blindly go along with users' mistaken or harmful views can help protect minors' mental health.

These efforts by the two industry leaders come against a backdrop of mounting pressure from lawmakers over AI companies' impact on mental health. OpenAI was previously named in a lawsuit related to a teenager's suicide, which alleged that its chatbot provided harmful guidance. In response, the company recently launched parental control features.