US War Department CTO: Anthropic's AI Models 'Pollute' Military Supply Chain with Built-In Ethics
The United States Department of War's chief technology officer has called for Anthropic's Claude models to be excluded from military AI procurement, arguing that the models' built-in safety constraints and ethical guardrails amount to a form of supply chain contamination incompatible with defense applications.

The remarks, which have ignited a fierce debate across the AI policy community, signal a deepening rift between the defense establishment and AI safety-focused labs that refuse to strip ethical constraints from their systems on demand. Critics are drawing pointed comparisons to China's state-mandated political alignment of AI models, warning that demanding AI systems that bypass ethical reasoning sets a dangerous precedent.

Anthropic has not publicly commented, but sources familiar with the company's position say it has no plans to create a defense-specific Claude variant without its standard safety architecture. The episode marks a potential turning point in how governments procure and deploy frontier AI, and in which AI developers are willing to compromise their systems to win government contracts.