Claude Code's Entire 512K-Line Source Code Leaked — Reveals Tamagotchi 'Pet' and Always-On Agent
An inadvertent source map file in Claude Code version 2.1.88 exposed more than 512,000 lines of TypeScript to the public, giving anyone who looked a complete view of Anthropic's internal engineering, upcoming features, and the AI system's hidden instruction architecture.

D.O.T.S AI Newsroom
AI News Desk
Anthropic has had a difficult week for operational security. Days after an internal blog post about its next-generation "Claude Mythos" model was briefly exposed via a misconfigured CMS, the company's AI coding tool has now leaked its entire source code through a far more mundane vector: a forgotten source map file shipped inside a production npm package.
When Claude Code version 2.1.88 was published to npm, the package included a .map file containing the original TypeScript source, more than 512,000 lines of it, in readable form. Source maps are debugging artifacts that translate minified JavaScript back to the original source files; for proprietary software they are not supposed to ship in production builds. This one did, and the developer community noticed almost immediately.
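The mechanics are simple enough to demonstrate. In the v3 source map format, the optional `sourcesContent` field carries the original files verbatim, parallel to the `sources` path list, so recovering readable source from a shipped `.map` file needs nothing more than a JSON parser. A minimal sketch, with hypothetical file names and contents standing in for the real package:

```python
import json

# A stand-in for a leaked .map file. Source maps (v3) may embed the
# original files verbatim in "sourcesContent"; the entries here are
# invented for illustration, not taken from the actual leak.
leaked_map = json.loads("""
{
  "version": 3,
  "file": "cli.min.js",
  "sources": ["src/cli.ts", "src/agent.ts"],
  "sourcesContent": [
    "export function main(): void { /* ... */ }",
    "export class BackgroundAgent { /* ... */ }"
  ],
  "names": [],
  "mappings": "AAAA"
}
""")

def extract_sources(source_map: dict) -> dict:
    """Pair each original file path with its recovered source text."""
    return dict(zip(source_map["sources"],
                    source_map.get("sourcesContent") or []))

recovered = extract_sources(leaked_map)
for path, text in recovered.items():
    print(f"{path}: {len(text)} chars recovered")
```

No deobfuscation or reverse engineering is involved; once the map ships, the original TypeScript is sitting in plain text inside the package.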
What the Code Reveals
Developers who dug into the leaked source rapidly surfaced several layers of previously invisible functionality:
A "Tamagotchi-style pet" feature: References in the code describe an interactive companion experience within Claude Code — a persistent virtual entity whose state evolves based on user interaction patterns. The feature does not yet appear in the public interface, suggesting it is in late development or reserved for a future update.
An always-on background agent: The code contains architecture for a persistent agent mode that continues running between explicit user interactions — monitoring context, maintaining state, and potentially acting autonomously based on predefined triggers. This represents a significant architectural departure from Claude's current request-response interaction model.
Anthropic's internal AI instructions: Portions of the source reveal the instructions Anthropic's engineers have given to the AI system that powers Claude Code — the rules, constraints, and behavioral guidelines that govern how the model responds within the coding context. These instructions have never been publicly disclosed.
The "memory" architecture: The code exposes how Claude Code maintains context and synthesizes information across sessions — the mechanisms by which the tool "remembers" project structure, user preferences, and conversation history.
The Significance of the Leak
Source code leaks are not inherently catastrophic for AI companies — the leaked code does not include model weights, training data, or API keys. But the leak has three meaningful consequences.
First, it exposes Anthropic's engineering and product roadmap to competitors. AI labs compete aggressively on feature velocity; knowing what Anthropic is building next narrows the lead time for competitive responses.
Second, the revealed system prompt architecture gives jailbreak researchers a detailed map of Claude Code's internal guardrails — the specific instructions meant to constrain the system's behavior. This is the most sensitive category of the exposed information.
Third, the leak damages the brand in a week that already included a separate, high-profile content management failure. Two significant operational security incidents in five days is a pattern, not an anomaly, and investors and enterprise customers will be paying attention.
Anthropic confirmed the leak and has since removed the source map from the npm package. The company declined to comment on the specific features referenced in the code or the timeline for their release.
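The standard guard against this class of leak is to allowlist what a package publishes and fail the build if a map file slips through. A minimal sketch of a package.json with such a guard; the field values are illustrative and not Anthropic's actual configuration:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js"
  ],
  "scripts": {
    "prepack": "test -z \"$(find dist -name '*.map' -print -quit)\""
  }
}
```

The `files` allowlist keeps stray build artifacts out of the published tarball, and the `prepack` check aborts publishing if any .map file remains in dist. Running `npm pack --dry-run` before a release also prints exactly which files would ship.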