Palantir Demos Reveal How AI Chatbots Could Help the Pentagon Generate War Plans
Internal demos and Pentagon procurement records obtained by Wired show Palantir presenting military officials with AI chatbot interfaces — built on models including Anthropic's Claude — capable of ingesting classified intelligence reports and suggesting operational plans, target prioritization frameworks, and logistics recommendations in natural language. The demos represent a materially different use case from the customer service and productivity applications that have dominated enterprise AI discourse: these are decision-support tools operating in high-stakes, life-or-death domains where the cost of model error or manipulation is not a business inconvenience but a potential war crime. The disclosures arrive as Anthropic fights in court to resist Pentagon pressure to deploy Claude without its standard safety constraints, with critics arguing that Palantir's demos illustrate precisely the misuse scenarios those constraints were designed to prevent. Defense officials counter that human commanders retain final authority over all kinetic decisions, and that AI tools that surface options faster ultimately reduce the fog of war and improve decision quality.