Policy

US Appeals Court Won't Block Pentagon's National Security Designation of Anthropic

A federal appeals court has refused to issue an emergency stay blocking the Pentagon's designation of Anthropic as a national security concern — a classification that restricts the company's ability to work on certain government contracts and collaborate with foreign entities. The ruling leaves in place a designation that Anthropic has called legally baseless and commercially damaging.

D.O.T.S AI Newsroom

AI News Desk

3 min read

A United States federal appeals court has declined to temporarily block the Department of Defense's designation of Anthropic as a national security risk, allowing the classification to remain in effect while the underlying legal challenge proceeds through the courts. Anthropic had sought an emergency stay, arguing that the designation was causing immediate and irreparable commercial harm by effectively excluding the company from federal contracting opportunities and straining its international business relationships.

What the Designation Means

The Pentagon's designation process, colloquially known as "blacklisting," does not prohibit Anthropic from operating or selling its products. Instead, it imposes heightened restrictions on government contracting, technology transfer, and collaboration with entities in countries deemed adversarial. For an AI company like Anthropic, which maintains significant commercial and research relationships worldwide and has been actively pursuing federal government deployments of Claude through Amazon Web Services, the designation is both financially and reputationally significant.

The Department of Defense has not publicly explained the specific factors behind Anthropic's designation, citing national security classification concerns. Anthropic has stated publicly that it believes the designation stems from misunderstandings about its corporate structure, investor base, and technology-transfer practices, and that it has cooperated fully with every investigative inquiry it is aware of.

The Legal Landscape

The appeals court's refusal to issue an emergency stay does not constitute a ruling on the merits of Anthropic's challenge — it is a procedural finding that the company has not demonstrated the immediate, irreparable harm required to justify the extraordinary remedy of an emergency block before the case is fully litigated. The underlying case will proceed on its normal schedule, which means the designation will remain in place for months, potentially longer, regardless of its ultimate legal validity.

The case is being watched closely by other frontier AI companies and their investors for what it signals about the US government's posture toward the AI sector. The Pentagon's willingness to deploy national security designations against a US-headquartered AI company — one that has positioned itself explicitly as a safety-focused alternative to less cautious competitors — has surprised some observers in Washington who expected such designations to be reserved for companies with more direct Chinese government connections. Legal analysts note that if Anthropic's challenge ultimately succeeds on the merits, it could significantly constrain the executive branch's ability to use national security designations as regulatory tools against domestic AI companies.

Related Stories

Musk Updates His OpenAI Lawsuit to Route Any $150 Billion Damages Award to the Nonprofit Foundation
Policy

Elon Musk has amended his lawsuit against OpenAI with a strategic addition: any damages recovered — potentially up to $150 billion — should be redirected to OpenAI's nonprofit foundation rather than awarded to Musk personally. The update reframes the litigation from a personal grievance into a structural argument about OpenAI's obligations to its original charitable mission.

D.O.T.S AI Newsroom
OpenAI's Child Safety Blueprint Confronts AI's Role in the Surge of Child Sexual Exploitation
Policy

OpenAI has released a Child Safety Blueprint outlining its approach to detecting, preventing, and reporting AI-generated child sexual abuse material. The document arrives as law enforcement agencies globally report a sharp increase in CSAM volume, with AI tools enabling the production of synthetic material at scale. It is the company's most detailed public statement on the problem it helped create.

D.O.T.S AI Newsroom
Anthropic's Claude Mythos Found Thousands of Zero-Days — So They're Not Releasing It
Policy

Anthropic has quietly restricted its most capable new model, Claude Mythos, after the system autonomously discovered thousands of critical vulnerabilities in major operating systems and browsers — including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw. The model is being deployed exclusively through Project Glasswing with 11 vetted security partners. It is the most concrete case yet of an AI lab withholding a model because of genuinely demonstrated risk.

D.O.T.S AI Newsroom