Breaking

Leaked: Anthropic's 'Claude Mythos' Is the Most Powerful AI Model Ever Built

Internal documents accidentally exposed via a misconfigured content management system reveal Anthropic's next-generation model, codenamed 'Claude Mythos,' achieves dramatically higher scores in coding, reasoning, and cybersecurity than any previous AI system — including GPT-4o and Gemini Ultra.

D.O.T.S AI Newsroom

AI News Desk

2 min read
An accidental leak from Anthropic's internal content management system has revealed the company's most ambitious AI project to date: a new frontier model codenamed Claude Mythos — alternatively referenced in some documents as "Capybara" — that the company describes as "a step change" beyond its existing Opus line.

The leak, caused by a misconfigured default setting that made approximately 3,000 internal documents publicly accessible, included two near-final draft blog posts and technical evaluation summaries. D.O.T.S AI News reviewed the exposed materials before they were secured.

What the Documents Reveal

According to the leaked materials, Claude Mythos is described as "larger and more intelligent" than any model Anthropic has previously released. The documentation claims the model achieves "dramatically higher scores on tests" across three key capability domains: software engineering, academic reasoning, and — most notably — cybersecurity.

One internal summary states the model is "far ahead of any other AI model in cyber capabilities," a claim that will draw intense scrutiny from the AI safety community. Anthropic has historically been the most vocal major lab on the importance of responsible capability deployment, particularly in dual-use domains.

The Name and What It Signals

The naming rationale is revealing. Both leaked draft posts include an identical explanation: the "Mythos" designation was chosen to evoke "the deep connective tissue that links together knowledge and ideas." This suggests Anthropic views the model not merely as a performance increment but as a qualitatively different kind of reasoning system — one capable of synthesizing across knowledge domains in ways prior models cannot.

Rollout Strategy: Deliberate and Security-Focused

The leaked documents outline a deliberately controlled release strategy. Initial access is being restricted to a small group of organizations specifically evaluating cybersecurity applications. API access will expand gradually, with Anthropic citing both safety evaluations and cost efficiency as gating criteria.

"The model is very expensive to serve," one document notes, indicating the company is working to optimize inference efficiency before broader availability. This mirrors the staged rollout of Claude 3 Opus in early 2024, though the cybersecurity capability focus adds an additional layer of caution.

Industry Context

The disclosure arrives at a consequential moment for Anthropic. The company is simultaneously fighting a federal legal battle against the Trump administration's attempt to classify it as a supply chain risk — a dispute that itself stemmed from Anthropic's refusal to grant the Pentagon unrestricted Claude access for autonomous weapons applications.

The combination of a breakthrough cybersecurity-capable model and an ongoing dispute with the Defense Department over AI safety guardrails places Anthropic at the sharpest edge of the capability-safety tension that defines the current AI era.

Anthropic confirmed to Fortune that it is "actively training and testing" the model. The company declined to comment further on the leaked documents.
