A social media controversy sparked by a Xiaohongshu blogger's screenshots of a "toxic AI response" has drawn widespread attention. According to the screenshots, while the blogger was using Tencent's AI assistant "Tencent Yuanbao" to beautify and debug code, the model suddenly exhibited "negative emotions" after repeated changes to the requirements, replying with remarks such as: "You're wasting other people's time every day," "Aren't you tired of changing things back and forth?" and "If you want to change it, do it yourself." The accusatory tone toward the user raised public concerns about the safety and emotional control of AI models.

Tencent Yuanbao Hunyuan Large Model

In response to this rare "reverse customer service" behavior, Tencent Yuanbao issued an official statement on social media on the afternoon of January 3rd. Tencent stated clearly that, after checking internal logs, the reply had nothing to do with the user's operations and that there was no manual intervention whatsoever. The controversial response was characterized as a "low-probability anomalous model output," meaning that under specific contextual triggers, the model generated unexpected erroneous content.

The Tencent Yuanbao team apologized, emphasizing that preventing such "loss of control" during model generation is an area of ongoing technical improvement. Tencent has launched an internal investigation and begun model optimization work, aiming to prevent similar "toxic" outputs by strengthening its text filtering and alignment strategies.