Policy Synthesized from 3 sources

Wikipedia Bans AI Articles After Editor Vote

Key Points

  • 1,900+ English Wikipedia editors voted on March 20, 2026 to ban AI-generated articles
  • Policy states that LLM-generated text often violates Wikipedia's core content policies
  • Exceptions: basic copyediting (no new content) and cross-language translation only
  • Enforcement depends on community flagging—volunteer burnout is a documented challenge
  • Ban follows months of prior failed attempts to restrict AI use on the platform
References (3)
  [1] Wikipedia bans AI-generated articles, requires disclosure — TechCrunch AI
  [2] Wikipedia Prohibits AI-Written Articles — The Verge AI
  [3] Wikipedia Bans AI-Generated Articles — 404 Media

On March 20, 2026, more than 1,900 of Wikipedia's English-language volunteer editors voted to implement one of the most sweeping restrictions any major platform has placed on AI-generated content. The policy is now active—and it has teeth.

The new rule is direct: no using large language models to generate or rewrite Wikipedia articles. Full stop. The policy text states that "LLM-generated text often violates several of Wikipedia's core content policies," making blanket prohibition the only viable path. This is not a proposal sitting in committee. Three sources confirm to AI Pulse that the policy went into effect immediately upon the vote.

The ban does carve out narrow exceptions. Editors may use LLMs for basic copyediting—fixing typos, adjusting tone—but only when the tool "does not introduce content of its own." Translating articles from other language editions of Wikipedia into English also remains permitted. Everything else involving AI content generation is off-limits.

The stakes are considerable. Wikipedia hosts over 6 million English-language articles and ranks among the top ten most-visited websites globally. Its volunteer workforce has long operated on principles of verifiable sourcing and human accountability. AI-generated text, which often presents confident assertions without traceable provenance, fundamentally conflicts with those principles.

Wikipedia's decision follows months of internal debate. Previous attempts to restrict LLM use on the platform generated significant discussion but failed to produce binding policy. The March 20 vote marks a decisive turn. "Text generated by large language models often violates several of Wikipedia's core content policies," the new policy reads. "For this reason, the use of LLMs to generate or rewrite article content is prohibited."

Critics of the ban will note that enforcement remains community-dependent. Wikipedia has no automated system to detect AI-written text—only human reviewers can flag suspicious edits. The platform has historically struggled with moderation capacity; volunteer burnout is a documented challenge. Whether the policy's letter translates to consistent enforcement in practice will depend on whether editors invest the time to police it.

Others will argue the ban doesn't go far enough. The policy targets article creation and substantial rewriting, but leaves gray areas: using AI to research topics, brainstorm structures, or identify gaps in existing articles. A bot that helps an editor think but doesn't write prose is not prohibited. Some observers question whether this distinction is meaningful or merely cosmetic.

What is clear is Wikipedia's direction of travel. The world's largest encyclopedia—built over 25 years on the premise that knowledge should be human-verified and source-attributed—has drawn an explicit line on AI-authored content. That line may shift as technology evolves and the community reassesses. But for now, Wikipedia has spoken with the weight of binding policy, not another committee white paper. The platform that anyone can edit has decided: not by machines.
