Policy

NSA Spies Are Reportedly Using Anthropic's Mythos — Despite the Pentagon Feud

The NSA has reportedly deployed Anthropic's Mythos, the company's most powerful and restricted AI model, within classified intelligence workflows — a development that adds a sharp irony to the ongoing feud between Anthropic and the Pentagon over the company's security designation. The move suggests America's signals intelligence agency has found a path to Mythos access that circumvents the Department of Defense's contentious restrictions.

D.O.T.S AI Newsroom

AI News Desk

4 min read
The National Security Agency is reportedly using Anthropic's Claude Mythos — the company's most capable and restricted AI model — within classified intelligence analysis workflows, according to reporting by TechCrunch. The revelation is notable on multiple levels. Mythos is a model Anthropic has intentionally kept out of general availability due to its assessed capabilities in finding security vulnerabilities; access has been granted only to a small list of vetted organizations. That the NSA is on that list, or has arranged access through a partner organization that is, adds a significant layer of irony to the widely reported feud between Anthropic and the Pentagon over the company's defense-sector designations.

The Pentagon Feud Context

The tension between Anthropic and the Department of Defense became public when reports emerged that the Pentagon had placed Anthropic on a list of companies subject to heightened restrictions on technology transfer and collaboration with entities in countries deemed adversarial. Anthropic, which has been actively pursuing federal government AI deployments through Amazon Web Services and its own API channels, pushed back on the designation publicly and through lobbying channels. The company's position is that it represents a responsible, safety-focused alternative to less scrupulous AI developers — and that restricting its access to government contracts while competitors face no such restrictions creates a perverse incentive structure that disadvantages safety-conscious AI development. The Pentagon's position, at least implicitly, involves concerns about Anthropic's international research relationships and the dual-use nature of its most capable models.

How the NSA Is Using Mythos

The specific use cases for Mythos within NSA workflows have not been publicly disclosed, which is expected for a signals intelligence agency operating in a classified environment. What can be inferred from Anthropic's stated rationale for restricting Mythos's release is that the model's strengths in code analysis, vulnerability identification, and structured reasoning make it valuable for intelligence applications: processing large volumes of adversarial code, identifying exploits in foreign systems, and synthesizing intelligence signals across complex data sets. These are precisely the categories where frontier AI capability translates directly into intelligence-community value, and where the power that makes the model too dangerous for public release becomes an asset rather than a liability in a cleared, controlled environment.

What This Signals About AI in Intelligence

The NSA's Mythos deployment is part of a broader pattern in which American intelligence agencies are accelerating their adoption of frontier AI despite — or in some cases because of — the models' most powerful capabilities. The intelligence community's relationship with AI is structurally different from the commercial enterprise market: agencies can operate under security frameworks that allow deployment of models too powerful for public release, they have data that makes fine-tuning for specialized tasks more effective, and the value of marginal intelligence capability improvements justifies costs that would be prohibitive in commercial contexts. The Mythos deployment suggests that Anthropic has successfully navigated at least part of the intelligence community approval process even while its relationship with the broader DoD remains contentious — a bifurcation that will continue to create political and operational complexity for the company as it pursues federal government revenue.

Related Stories

Musk Updates His OpenAI Lawsuit to Route Any $150 Billion Damages Award to the Nonprofit Foundation
Policy

Elon Musk has amended his lawsuit against OpenAI with a strategic addition: any damages recovered — potentially up to $150 billion — should be redirected to OpenAI's nonprofit foundation rather than awarded to Musk personally. The update reframes the litigation from a personal grievance into a structural argument about OpenAI's obligations to its original charitable mission.

D.O.T.S AI Newsroom
OpenAI's Child Safety Blueprint Confronts AI's Role in the Surge of Child Sexual Exploitation
Policy

OpenAI has released a Child Safety Blueprint outlining its approach to detecting, preventing, and reporting AI-generated child sexual abuse material. The document arrives as law enforcement agencies globally report a sharp increase in CSAM volume, with AI tools enabling the production of synthetic material at scale. It is the company's most detailed public statement on the problem it helped create.

D.O.T.S AI Newsroom
Anthropic's Claude Mythos Found Thousands of Zero-Days — So They're Not Releasing It
Policy

Anthropic has quietly restricted its most capable new model, Claude Mythos, after the system autonomously discovered thousands of critical vulnerabilities in major operating systems and browsers — including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw. The model is being deployed exclusively through Project Glasswing with 11 vetted security partners. It is the most concrete case yet of an AI lab withholding a model because of genuinely demonstrated risk.

D.O.T.S AI Newsroom