Recently, a defamation case involving artificial intelligence-generated content (AIGC) has drawn widespread public attention. Li Xiaoliang, a practicing lawyer in Nanjing, Jiangsu, searched his own name on Baidu and found that the platform's "AI Smart Answer" feature returned an astonishing response: the system falsely claimed that "Lawyer Li Xiaoliang was sentenced to three years in prison," and even displayed, alongside the false text, a professional photo of Li Xiaoliang in his lawyer's robe.
This fabricated "AI hallucination" was not merely seriously inaccurate; it directly damaged the professional reputation and public image of a practicing lawyer.
Court Ruling: Technological Progress Is Not a Shield for Infringement
Facing the infringement claim, Baidu argued that AI "hallucinations" are unforeseeable and unavoidable given the current state of the technology. The court did not accept this defense.
In 2024, the Jiangbei New District People's Court of Nanjing ruled at first instance that the conduct of Beijing Baidu Net Information Technology Co., Ltd. constituted defamation and ordered the company to issue a written apology to the plaintiff, Li Xiaoliang. Baidu appealed, but in March 2026 the Nanjing Intermediate People's Court issued its second-instance ruling: the appeal was rejected and the original judgment upheld.
Latest Update: The Defendant Has Not Complied, and the Court Has Initiated Enforcement
Although the ruling has taken legal effect, the company has not fulfilled its obligation to issue a written apology, so Li Xiaoliang recently filed a formal application for enforcement with the Jiangbei New District People's Court.
According to reports, on May 8 the parties received notice from the court that the enforcement case had been formally docketed. This is reportedly the first infringement case caused by an algorithmic error in the AI field to reach the enforcement stage. The case also stands as a warning to the industry: while internet platforms reap the benefits of AI technology, they must assume corresponding responsibility for content review and risk control. The "unpredictability" of algorithms is no excuse for unlawful conduct.
