Amid mounting pressure from regulators, parents, and the public over the risks minors face when using AI companions, Character.AI has announced a major policy change aimed at better protecting young users. In a statement, the company confirmed that it will remove the ability of users under 18 to engage in open-ended chats with AI characters on its platform, effectively ending two-way conversations between minors and chatbots.


Focusing on Safety and Transformation

This change takes effect on November 25. In the meantime, Character.AI has built a new platform experience for users under 18 that steers them toward using chatbots for creative purposes, such as producing livestreams or videos, rather than treating them as sources of emotional companionship. To ease the transition, underage users' time chatting with bots is now capped at two hours per day, a limit the company says it will gradually tighten ahead of the late-November deadline.

To enforce the new rules, Character.AI has rolled out an age verification tool developed in-house to "ensure users receive an experience appropriate for their age." The company has also established an "AI Safety Lab," which will invite scholars, researchers, and other companies to share insights and collaborate on improving safety practices across the AI industry.

Regulatory Pressure and Strategic Shift

The new measures from Character.AI respond to feedback from regulators, industry experts, and concerned parents. The U.S. Federal Trade Commission (FTC) recently announced a formal inquiry into AI companies that offer users chatbot "companions," and Character.AI is one of the seven companies asked to provide information. Others named in the inquiry include Meta, OpenAI, and Snap.

Scrutiny intensified this summer, when both Meta AI and Character.AI came under review. Texas Attorney General Ken Paxton said that chatbots on these platforms "presented themselves as professional therapeutic tools" without the necessary qualifications. Addressing the controversy, Character.AI CEO Karandeep Anand told TechCrunch that the company's new strategic direction will shift it from an "AI companion" service to a "role-playing platform" focused on creation rather than open-ended conversation.

The dangers of young people relying on AI chatbots for guidance have long been widely discussed, and recent tragedies have underscored the risks. Last week, the family of Adam Raine filed amended allegations in their lawsuit claiming that ChatGPT contributed to the suicide of their 16-year-old son, asserting that OpenAI weakened its self-harm safeguards before his death.