Anthropic's Claude Code Leak Spread to 8,000 GitHub Repos Before Takedowns Could Catch It
Despite mass DMCA takedown requests, the proprietary source code behind Claude Code has been cloned and adapted more than 8,000 times on GitHub — with at least one developer rewriting it in multiple languages to circumvent removal. The breach is Anthropic's second major internal leak in days, ahead of a reported $380B IPO.

D.O.T.S AI Newsroom
AI News Desk
Anthropic issued mass copyright takedown requests that removed more than 8,000 copies and adaptations of Claude Code from GitHub, but the containment effort has already been outpaced by how quickly developers are adapting what they found. According to the Wall Street Journal, at least one programmer has used AI tools to rewrite the leaked code in multiple programming languages, creating derivative versions that DMCA notices cannot easily reach. The intellectual property breach is the second major internal leak from Anthropic in a matter of days, and it arrives at a moment of acute sensitivity: the company is pursuing an IPO at a reported valuation of $380 billion.
What Was Exposed — and Why It Matters
The Claude Code leak disclosed more than scaffolding code. The exposed materials contained the proprietary techniques Anthropic uses to orchestrate AI models functioning as autonomous coding agents — including what the company internally calls a "dreaming" function, designed to help agents consolidate and organize pending tasks during periods of inactivity. For any competitor building a rival coding agent, this is not background context. It is the blueprint for the agentic orchestration layer that distinguishes Claude Code from tools built on generic API access.
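Anthropic has not published how the mechanism works, and the leaked implementation itself is not reproduced here, but the description above suggests something like an idle-time maintenance pass over an agent's pending work. The TypeScript below is a minimal illustrative sketch of that general pattern, not the leaked code; every name in it (CodingAgent, PendingTask, maybeDream, the idle threshold) is a hypothetical stand-in.

```typescript
// Hypothetical sketch of an idle-time "dreaming" pass: once the agent has
// been inactive past a threshold, it deduplicates and reprioritizes its
// pending task queue. Structure and names are illustrative assumptions,
// not Anthropic's implementation.

interface PendingTask {
  id: string;
  description: string;
  priority: number;   // higher runs first
  createdAt: number;  // epoch milliseconds
}

class CodingAgent {
  private queue: PendingTask[] = [];
  private lastActivity = Date.now();

  enqueue(task: PendingTask): void {
    this.queue.push(task);
    this.lastActivity = Date.now();
  }

  // Called periodically; consolidation only runs when the agent is idle.
  maybeDream(idleThresholdMs = 30_000): void {
    if (Date.now() - this.lastActivity < idleThresholdMs) return;

    // Merge tasks with identical descriptions, keeping the highest priority.
    const merged = new Map<string, PendingTask>();
    for (const task of this.queue) {
      const existing = merged.get(task.description);
      if (!existing || task.priority > existing.priority) {
        merged.set(task.description, task);
      }
    }

    // Reorder: highest priority first, with the oldest task breaking ties.
    this.queue = [...merged.values()].sort(
      (a, b) => b.priority - a.priority || a.createdAt - b.createdAt
    );
  }
}
```

Even in toy form, the pattern illustrates the point: the competitive value sits in the accumulated orchestration logic around the model, which is precisely the layer the leak exposed.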
The leak did not compromise model weights; the underlying model itself remains proprietary. But the orchestration methodology, the context management approach, and the dreaming consolidation mechanism represent months or years of internal research, now freely available to any engineer who cloned a repository before takedowns began. The asymmetry that defines this kind of IP exposure is stark: distributing proprietary code takes seconds, while removing it from the internet once it has spread is a structurally unsolvable problem.
The Second Leak in a Week
This incident follows the earlier accidental publication of internal blog posts about an unreleased model internally named "Claude Mythos," a next-generation system described in leaked documents as achieving "dramatically higher scores on tests" than any previous Anthropic model, particularly in cybersecurity. Both leaks have been attributed to human error in Anthropic's content management systems. Two significant operational security failures within a short window suggest systemic exposure rather than isolated incidents, a pattern that raises questions about internal documentation hygiene at a company now operating at significant scale.
For investors evaluating Anthropic's IPO, the question is not whether the leaked code materially damages any single product. It is whether a company whose core value proposition is proprietary AI research has the operational infrastructure to protect that research as it scales. The valuation case for a $380 billion AI lab is built on the assumption that its technical advantages are defensible. A week of internal leaks tests that assumption in a very public way.