Policy

Wikipedia Has Banned AI-Generated Content. The Two Exceptions Tell You Everything.

The Wikimedia Foundation has formally banned AI-generated content from Wikipedia's English-language encyclopedia, citing concerns about accuracy, verifiability, and the erosion of contributor trust. Two exceptions survive: AI can be used for translations, and for minor copy edits. The policy formalizes what many senior Wikipedia editors had already been enforcing informally for months.

D.O.T.S AI Newsroom


AI News Desk

2 min read

Wikipedia has banned AI-generated content from its encyclopedia. The Wikimedia Foundation published the formal policy in late March 2026, making official what many of the platform's senior editors had already been enforcing on a de facto basis: articles on the English-language Wikipedia cannot contain content generated by large language models.

The ban applies to article content. Two narrow exceptions remain: AI can be used to assist with translations between languages, and to perform minor grammatical or copy edits, provided the AI is not generating new claims or sourced information. The restriction does not extend to discussion pages or editor-facing tools.

Why Wikipedia Specifically Matters Here

Wikipedia is the largest collaboratively maintained reference work in human history, and a foundational training data source for many of the AI models now producing the content it is banning. The policy creates an unusual structural dynamic: AI models trained on Wikipedia-derived data produce outputs that now cannot enter Wikipedia, which preserves the encyclopedia's training-data integrity for future model generations, whether or not that was the Wikimedia Foundation's intent.

The verifiability concern is straightforward. Wikipedia's editorial standards require that every claim be sourced to a reliable published reference. AI language models generate plausible-sounding text that frequently cites non-existent sources, misrepresents real ones, or introduces false information with high confidence. For a platform where accuracy is the core product promise, the failure mode is severe.

Contributor Trust and the Asymmetry Problem

The contributor trust issue is subtler but arguably more important for Wikipedia's long-term health. The platform depends on a community of volunteer editors who invest significant time in research, sourcing, and dispute resolution. AI-generated content that can be produced in seconds and submitted at scale undermines the contribution economics that the volunteer model relies on. The ban protects not just accuracy but the social architecture of the project itself.

Wikipedia's decision will be watched closely by other major knowledge platforms grappling with the same tension between AI productivity and content integrity.


Related Stories

Musk Updates His OpenAI Lawsuit to Route Any $150 Billion Damages Award to the Nonprofit Foundation
Policy


Elon Musk has amended his lawsuit against OpenAI with a strategic addition: any damages recovered, potentially up to $150 billion, should be redirected to OpenAI's nonprofit foundation rather than awarded to Musk personally. The update reframes the litigation from a personal grievance into a structural argument about OpenAI's obligations to its original charitable mission.

D.O.T.S AI Newsroom
OpenAI's Child Safety Blueprint Confronts AI's Role in the Surge of Child Sexual Exploitation
Policy


OpenAI has released a Child Safety Blueprint outlining its approach to detecting, preventing, and reporting AI-generated child sexual abuse material. The document arrives as law enforcement agencies globally report a sharp increase in CSAM volume, with AI tools enabling the production of synthetic material at scale. It is the company's most detailed public statement on the problem it helped create.

D.O.T.S AI Newsroom
Anthropic's Claude Mythos Found Thousands of Zero-Days, So They're Not Releasing It
Policy


Anthropic has quietly restricted its most capable new model, Claude Mythos, after the system autonomously discovered thousands of critical vulnerabilities in major operating systems and browsers, including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw. The model is being deployed exclusively through Project Glasswing with 11 vetted security partners. It is the most concrete case yet of an AI lab withholding a model because of genuinely demonstrated risk.

D.O.T.S AI Newsroom