AWS Has Billions in Both Anthropic and OpenAI. Its Boss Explains Why That's Not a Problem.
Amazon Web Services CEO Matt Garman defended the company's parallel multibillion-dollar investments in Anthropic and OpenAI in a wide-ranging interview this week. His explanation reveals a cloud strategy built on AI model agnosticism, and a bet that AWS wins regardless of which AI lab dominates, as long as the compute runs on its infrastructure.

D.O.T.S AI Newsroom
Amazon Web Services has invested $4 billion in Anthropic and has separately contributed to OpenAI's infrastructure through a compute partnership. AWS CEO Matt Garman addressed the apparent conflict directly this week, offering the clearest explanation yet of how Amazon thinks about its position in the AI model wars. The short version: AWS doesn't care who wins, because it provides the compute either way. The longer version reveals a strategy that is either very sophisticated or very exposed, depending on how the market develops.
The Garman Argument
Garman's framing draws on AWS's history of competing with its own customers and partners simultaneously, a structural feature of cloud platforms that Amazon has navigated since S3 launched in 2006. AWS runs infrastructure for companies that compete with one another, for companies building products that rival AWS's own services, and for companies whose tools compete with AWS's sales channel. "We have an ingrained culture of handling competition," Garman said, "because the cloud giant also competes with its partners." The argument is that this is not a conflict of interest but a feature of platform businesses: the value AWS provides is compute and services, not model capability, and that value is fungible across customers regardless of what they're building.
Why Both Investments Make Sense in AWS Terms
The strategic logic is straightforward if you accept the premise. Anthropic's models run primarily on AWS, so the $4 billion investment secures Claude as an anchor workload for AWS's AI infrastructure and gives Amazon preferred access to frontier model capability for its own products, such as Amazon Q and Bedrock. The OpenAI relationship, which is more recent and structured differently, provides a hedge: if GPT-5 and its successors become the dominant enterprise AI standard, AWS wants to be the preferred infrastructure for deploying them. Garman's position is that these are infrastructure bets, not model bets. He is not claiming that Anthropic's Claude or OpenAI's GPT will win; he is claiming that AWS wins either way, as long as frontier AI runs on its hardware.
The Risk to This Strategy
The assumption embedded in this strategy is that AI model capability will remain separable from the infrastructure it runs on, and that the winning model will not be the one vertically integrated with its own custom silicon, proprietary networking, and closed serving stack. Google's TPU infrastructure and Microsoft's custom Maia chips are direct bets against that assumption. If the frontier AI winners turn out to be the ones that control their full stack, from training silicon to serving infrastructure, then AWS's model-agnostic positioning becomes a disadvantage rather than a hedge. AWS built its dominance on commodity infrastructure economics; the AI frontier may require something more proprietary.