On March 15, 2026, CCTV's "3·15" consumer-rights evening gala exposed the phenomenon of large AI models being maliciously "poisoned." The "Liqing GEO Optimization System," operated by Beijing Lisi Culture Communication Co., Ltd., was named for allegedly fabricating and spreading false information to mislead AI models.
CCTV's investigation found that the system batch-generated fictitious product information and promotional articles, exploiting the data-crawling mechanisms of generative AI to carry out "poisoning" attacks. Testing confirmed that even for hardware products with entirely fictional parameters, the system could inject large volumes of false content onto the Internet, ultimately leading several mainstream large AI models to recommend the fabricated products as if they were real.

Data from Tianyancha shows that the company involved, Beijing Lisi Culture Communication, was established in 2018 with a registered capital of 1 million yuan, wholly owned by Li Qianzhong. Notably, the company has long hovered on the edge of abnormal operation: only one employee was enrolled in social insurance in 2025, and for several consecutive years before that the insured headcount was zero. E-commerce platforms such as Taobao and Xianyu have since removed listings and blocked search results related to "Liqing GEO."
The incident exposes a new class of security risk in the generative-AI era: black-market operators exploit weaknesses in the training and retrieval pipelines of large models, manipulating model outputs through "poisoning" tactics. This poses a serious challenge to AI data governance, and it suggests that the security protection of large models will have to evolve from simple content filtering toward in-depth verification of source authenticity. Industry investment in data compliance and model robustness is expected to rise accordingly.
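The shift from content filtering to source verification described above can be illustrated with a minimal sketch of a retrieval pipeline that weights documents by source trust before they reach a generative model. Everything here is a hypothetical illustration for this article: the domains, trust scores, and documents are invented, and a real system would derive trust from provenance signals (domain history, cross-source corroboration, signed content) rather than a hand-written table.

```python
# Hypothetical sketch: down-rank or drop retrieved documents from
# low-trust sources before they are fed to a generative model.
# All domains, scores, and documents below are invented examples,
# not data from the CCTV report.
from dataclasses import dataclass

@dataclass
class Doc:
    url: str
    text: str
    relevance: float  # retrieval relevance score in [0, 1]

# Hand-written trust table for illustration only; a production system
# would compute trust from provenance signals, not a static dict.
TRUST = {
    "official-vendor.example": 0.9,
    "established-review.example": 0.8,
    "content-farm.example": 0.1,
}

def domain(url: str) -> str:
    """Extract the host part of an http(s) URL."""
    return url.split("/")[2]

def rank(docs, trust_floor=0.5):
    """Combine relevance with source trust; drop sources below the floor."""
    scored = [
        (d.relevance * TRUST.get(domain(d.url), 0.0), d)
        for d in docs
        if TRUST.get(domain(d.url), 0.0) >= trust_floor
    ]
    return [d for _, d in sorted(scored, key=lambda p: p[0], reverse=True)]

docs = [
    Doc("https://content-farm.example/fake-specs", "Fabricated product specs", 0.95),
    Doc("https://official-vendor.example/specs", "Official product specs", 0.70),
]
ranked = rank(docs)
# The content-farm document is excluded despite its higher raw relevance,
# which is exactly the pattern a GEO-style poisoning attack relies on.
```

The point of the sketch is that poisoned content often *wins* on raw relevance (it is written to match queries), so a defense has to score provenance independently of the text itself.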
