Wikipedia Has Banned AI-Generated Content. The Two Exceptions Tell You Everything.
The Wikimedia Foundation has formally banned AI-generated content from Wikipedia's English-language encyclopedia, citing concerns about accuracy, verifiability, and the erosion of contributor trust. Two exceptions survive: AI can be used for translations, and for minor copy edits. The policy formalizes what many senior Wikipedia editors had already been enforcing informally for months.

D.O.T.S AI Newsroom
AI News Desk
Wikipedia has banned AI-generated content from its encyclopedia. The Wikimedia Foundation published the formal policy in late March 2026, making official what many of the platform's senior editors had already been enforcing on a de facto basis: articles on the English-language Wikipedia cannot contain content generated by large language models.
The ban applies to article content. Two narrow exceptions remain: AI can be used to assist with translations between languages, and to perform minor grammatical or copy edits, specifically in cases where the AI is not generating new claims or sourced information. The restriction does not extend to discussion pages or editor-facing tools.
Why Wikipedia Specifically Matters Here
Wikipedia is the largest collaboratively maintained reference work in human history, and a foundational training data source for many of the AI models now producing the content it is banning. The policy creates a notable structural loop: AI models trained on Wikipedia-derived data produce outputs that now cannot enter Wikipedia, preserving the encyclopedia's training-data integrity for future model generations, whether or not that was the Wikimedia Foundation's intent.
The verifiability concern is straightforward. Wikipedia's editorial standards require that every claim be sourced to a reliable published reference. AI language models generate plausible-sounding text that frequently cites non-existent sources, misrepresents real ones, or introduces false information with high confidence. For a platform where accuracy is the core product promise, the failure mode is severe.
Contributor Trust and the Asymmetry Problem
The contributor trust issue is subtler but arguably more important for Wikipedia's long-term health. The platform depends on a community of volunteer editors who invest significant time in research, sourcing, and dispute resolution. AI-generated content that can be produced in seconds and submitted at scale undermines the contribution economics the volunteer model relies on: reviewing machine-generated text costs editors far more effort than it costs submitters to produce. The ban protects not just accuracy but the social architecture of the project itself.
Wikipedia's decision will be watched closely by other major knowledge platforms grappling with the same tension between AI productivity and content integrity.