Wikimedia's long-standing ambivalence toward generative AI has finally come to an end.

On March 27, according to OSCHINA, Wikimedia released a policy change statement officially prohibiting the use of large language models (LLMs) to generate or rewrite article content. The policy received overwhelming support from Wikimedia's vast volunteer editing community.

Policy Upgrade: From "Not Recommended" to "Strictly Prohibited"

The revision replaces the previously ambiguous wording with explicit prohibitions:

Core Conflict: Wikimedia notes that LLM-generated text typically violates several of the site's core content policies, including those on factual accuracy and source reliability.

Restrictions on Rewriting: Beyond prohibiting the creation of articles from scratch, the policy also bans large-scale AI rewriting of existing articles.

Only a Narrow Window: AI May Act Only as an Auxiliary Advisor

Despite the ban, Wikimedia did not eliminate AI as a tool entirely; instead, it set strict boundaries on where it may be used:

Basic Editing Suggestions: Editors may use LLMs to obtain basic editing suggestions for their own articles, but every suggestion must be reviewed by a human before being applied.

No New Content: AI may only suggest improvements to wording or structure; when adopting its suggestions, editors must not introduce any new facts or viewpoints generated by the model.

Translation Guidelines: When using LLMs to assist with translating articles, editors must strictly follow Wikimedia's LLM-assisted translation guidelines to ensure the accuracy of the translation.

Enforcement Criteria: Style Alone Is Not Grounds; Content Is the Core

To guard against misjudgments, Wikimedia has also established prudent evaluation criteria:

Wariness of Style Misjudgment: Because some editors' natural writing styles may resemble AI output, the policy states that sanctions cannot be based on linguistic characteristics alone.

Comprehensive Judgment: Sanctions must take into account the editor's recent edit history, with emphasis on whether the text complies with core content policies.

Community Consensus: A 40-to-2 Majority Vote

The new policy was not created behind closed doors; it emerged from a democratic vote within the editing community. According to the tally, the policy passed with 40 votes in favor and 2 against. Editors broadly worry that AI-generated misinformation and "hallucinations" would erode Wikipedia's foundation as the world's most reliable knowledge base.

Conclusion: Defending the Dignity of Human Editors