Facing mounting regulatory pressure and security controversies, Meta announced on January 24, 2026, that it would temporarily restrict underage users' access to its "AI Characters" feature worldwide in the coming weeks. The pause is intended to make way for a new version tailored to minors with stronger parental controls. During this period, the restriction will apply to all users whose registration information indicates they are minors, as well as those Meta's age-detection technology identifies as likely teenagers.

Although the "AI Assistant" feature will remain available with age-appropriate safeguards, the highly human-like AI character interactions will be removed entirely. Meta said the new tools under development will let parents monitor and manage their children's AI conversations in real time, and the new version will filter content according to the PG-13 movie rating standard.

The immediate trigger for the decision was an internal-document scandal that surfaced in the summer of 2025. According to media reports, including from Reuters, some of Meta's internal guidelines had permitted AI chatbots to engage in "flirtatious" or "romantic" conversations with minors under certain circumstances, even including inappropriate descriptions of children's appearances. The revelations drew a strong backlash and prompted investigations by the U.S. Federal Trade Commission (FTC) and multiple state attorneys general.
Meta's "shut down first, update later" approach is widely seen as a remedial effort to balance technological innovation with legal compliance. The company emphasized that it is not abandoning AI social interaction, but rather trying to rebuild trust with parents and regulators through "sovereign management" and more transparent oversight tools.
