Policy

Federal Judge Blocks Trump's Anthropic Ban, Calls Pentagon's Security Label 'Orwellian'

U.S. District Judge Rita F. Lin issued a sweeping preliminary injunction against the Trump administration, ruling that classifying Anthropic as a 'supply chain risk' for publicly criticizing AI policy constitutes 'classic illegal First Amendment retaliation,' in a decision that could reshape the boundaries of AI governance.

D.O.T.S AI Newsroom · AI News Desk

A federal judge in San Francisco has delivered a decisive legal rebuke to the Trump administration, temporarily blocking an executive order that barred federal agencies from using Anthropic's Claude AI models and declaring the Pentagon's "supply chain risk" designation of the company an unconstitutional act of retaliation.

U.S. District Judge Rita F. Lin issued the preliminary injunction on March 26, 2026, framing her ruling in unusually forceful terms. "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," she wrote. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."

The Contract Dispute That Started It All

The case traces back to a failed $200 million Pentagon contracting negotiation. The Defense Department sought broad, unrestricted access to Anthropic's Claude models. Anthropic declined, citing its refusal to allow Claude to be used for autonomous weapons systems or mass surveillance applications — usage policies that are core to the company's published responsible scaling commitments.

Defense Secretary Pete Hegseth subsequently designated Anthropic a "supply chain risk," reportedly making it the first American company to receive such a classification. The designation triggered automatic exclusion from federal procurement across all agencies.

Why This Ruling Matters Beyond Anthropic

The injunction's implications extend far beyond a single company's federal contracting status. It establishes, at least preliminarily, that AI companies cannot be administratively punished for publicly articulating usage restrictions based on safety principles. For an industry where lab-stated policies on weapons, surveillance, and autonomy are the primary governance mechanism in the absence of binding federal regulation, Judge Lin's framing matters enormously.

The ruling also arrives as the broader AI policy landscape remains in flux. The EU AI Act's high-risk provisions take effect in stages through 2027. The U.S. Congress has produced no equivalent framework. The executive branch's use of procurement power as de facto AI regulation — now challenged in court — represents a distinctive and legally fraught approach.

A final ruling on the underlying dispute remains pending. Anthropic CEO Dario Amodei has not publicly commented on the injunction.

Related Stories

Musk Updates His OpenAI Lawsuit to Route Any $150 Billion Damages Award to the Nonprofit Foundation
Policy

Elon Musk has amended his lawsuit against OpenAI with a strategic addition: any damages recovered — potentially up to $150 billion — should be redirected to OpenAI's nonprofit foundation rather than awarded to Musk personally. The update reframes the litigation from a personal grievance into a structural argument about OpenAI's obligations to its original charitable mission.

OpenAI's Child Safety Blueprint Confronts AI's Role in the Surge of Child Sexual Exploitation
Policy

OpenAI has released a Child Safety Blueprint outlining its approach to detecting, preventing, and reporting AI-generated child sexual abuse material. The document arrives as law enforcement agencies globally report a sharp increase in CSAM volume, with AI tools enabling the production of synthetic material at scale. It is the company's most detailed public statement on the problem it helped create.

Anthropic's Claude Mythos Found Thousands of Zero-Days — So They're Not Releasing It
Policy

Anthropic has quietly restricted its most capable new model, Claude Mythos, after the system autonomously discovered thousands of critical vulnerabilities in major operating systems and browsers — including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw. The model is being deployed exclusively through Project Glasswing with 11 vetted security partners. It is the most concrete case yet of an AI lab withholding a model because of genuinely demonstrated risk.
