Anthropic Accidentally Mass-Deleted Thousands of GitHub Repos Trying to Suppress Its Own Leaked Source Code
In an attempt to remove its leaked Claude source code from the internet, Anthropic filed bulk DMCA takedown notices that swept up thousands of unrelated GitHub repositories. The company says it was an accident and has retracted the bulk of the notices — but the incident raises uncomfortable questions about AI labs wielding the DMCA as a blunt instrument.

D.O.T.S AI Newsroom
AI News Desk
Anthropic triggered a wave of accidental GitHub repository deletions after filing overbroad DMCA takedown notices intended to remove leaked Claude source code from the platform — a move the company has since described as an unintended error, according to TechCrunch.
What Happened
The sequence of events traces back to an earlier incident: Claude source code was leaked and spread across GitHub repositories. Anthropic responded by issuing bulk DMCA takedown notices to GitHub, which is standard practice for copyright holders attempting to remove infringing content at scale. The problem was the targeting: the automated notice system flagged and removed thousands of repositories that were not hosting Anthropic's leaked code.
The collateral damage included open-source projects, personal repositories, and community forks that happened to trigger false positives in Anthropic's detection system. Developers woke up to find their code had vanished — not because of anything they had done, but because they were caught in the blast radius of a poorly calibrated bulk enforcement action.
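Anthropic has not published how its detection worked, so the following is purely illustrative: a deliberately naive fingerprint matcher (all names, inputs, and thresholds here are hypothetical) that flags any repository sharing even one hashed window of lines with the leaked file. Because the shared window can be generic boilerplate, such as a common run of imports, a matcher like this produces exactly the kind of false positives described above.

```python
import hashlib

def fingerprint(source: str, n: int = 3) -> set[str]:
    """Hash every n-line window of a file (crude shingling)."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    return {
        hashlib.sha256("\n".join(lines[i:i + n]).encode()).hexdigest()
        for i in range(len(lines) - n + 1)
    }

def flag_repo(repo_source: str, leaked_fp: set[str], threshold: int = 1) -> bool:
    """Flag a repo if it shares at least `threshold` windows with the leak."""
    return len(fingerprint(repo_source) & leaked_fp) >= threshold

# Hypothetical leaked file: its first three lines are generic boilerplate.
leaked = "import os\nimport sys\nimport json\nmodel = load_proprietary_weights()"

# An unrelated project that happens to open with the same three imports.
unrelated = "import os\nimport sys\nimport json\ndef main():\n    print('hello')"

print(flag_repo(unrelated, fingerprint(leaked)))  # True — a false positive
```

A threshold of one window over common boilerplate guarantees false positives at GitHub scale; any serious matcher would have to down-weight ubiquitous fragments before acting on a match.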
Anthropic's Response
Anthropic executives confirmed to TechCrunch that the mass takedown was unintentional, characterizing it as a misconfiguration error in the automated notice system rather than deliberate enforcement overreach. The company said it is working to retract the erroneous notices and restore affected repositories. GitHub, which handles a high volume of DMCA requests, is typically responsive to retraction requests, but restoration timelines vary.
The Bigger Issue
The incident surfaces an important tension in how AI labs handle intellectual property enforcement. Bulk automated DMCA systems are legitimate legal tools — but their blast radius when misconfigured is substantial. For developers whose repositories were deleted without warning or due process, "it was an accident" provides cold comfort if their code, commit history, or issues were not backed up.
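For developers worried about that failure mode, the standard defensive move is a full local mirror, which preserves every ref, branch, tag, and the complete commit history. A minimal sketch, assuming a placeholder repository URL (the helper name is our own, not a real tool):

```python
import subprocess

def mirror_command(remote_url: str, dest_dir: str) -> list[str]:
    """Build a `git clone --mirror` invocation. Unlike a plain clone,
    --mirror copies all refs (branches, tags, notes), not just the
    default working branch."""
    return ["git", "clone", "--mirror", remote_url, dest_dir]

cmd = mirror_command("https://github.com/example/project.git", "project-backup.git")
# subprocess.run(cmd, check=True)  # uncomment to actually perform the clone
print(" ".join(cmd))
```

Note that a mirror covers only what lives in git: issues and pull-request metadata are stored on GitHub's side and would need a separate export via the GitHub API.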
More structurally: this incident is a reminder that leaked model code is extraordinarily difficult to contain once distributed. DMCA notices can remove copies from indexed platforms, but cannot un-cache content from mirrors, archives, or private copies. The enforcement action likely cost Anthropic more in developer goodwill than it gained in containment effectiveness.
The irony is not subtle: a company that builds AI systems in part to automate complex tasks had its automation go badly wrong in exactly the domain where errors have real human consequences.