Anthropic Secures Multi-Gigawatt TPU Agreement With Google and Broadcom as Revenue Surpasses $30B Annualized
Anthropic has locked in a landmark infrastructure deal with Google and Broadcom for multiple gigawatts of TPU computing capacity, set to come online in 2027. The agreement coincides with explosive revenue growth — annualized revenue now exceeds $30 billion, up from $9 billion at the end of 2025, and enterprise accounts generating more than $1 million per year have doubled since February.

D.O.T.S AI Newsroom
AI News Desk
Anthropic has secured what the company describes as a multi-gigawatt agreement with Google and Broadcom for Tensor Processing Unit (TPU) capacity, with infrastructure expected to become operational starting in 2027. The deal is the clearest public signal yet of just how aggressively the company is scaling — and how much infrastructure it believes it will need to serve the demand it is already generating.
The Numbers Behind the Deal
Anthropic's annualized revenue run rate now exceeds $30 billion, up from approximately $9 billion at the close of 2025. Since February, the company has also doubled its count of enterprise customers generating more than $1 million in annual revenue, surpassing 1,000 such accounts. These figures, disclosed alongside the infrastructure announcement, are extraordinary even by the standards of a fast-growing AI market. They suggest that enterprise Claude adoption is no longer a pilot phenomenon: it is a deployment-stage phenomenon, at scale, with paying customers under annual contracts.
The Hardware Strategy
The deal extends a strategy Anthropic has pursued deliberately: training Claude across multiple silicon ecosystems rather than depending on any single hardware vendor. Claude currently trains on Amazon's AWS Trainium chips, Google's TPUs, and Nvidia's GPUs, and it is the only major frontier model available natively on all three major cloud platforms: AWS, Google Cloud, and Microsoft Azure. Amazon remains Anthropic's largest cloud partner, but the Google-Broadcom TPU agreement both diversifies and massively expands the compute base available for future model generations.
Securing gigawatts of capacity years in advance reflects a basic reality of frontier AI development: you cannot negotiate for compute the week you need it. The infrastructure required to train and serve increasingly capable models is built on timelines that precede the models themselves. Companies that do not lock in capacity early may find themselves constrained when the next generation of systems is ready to train.
What This Signals for the Market
For enterprises evaluating AI vendors, Anthropic's revenue trajectory and infrastructure depth increasingly function as stability signals. A company with $30 billion in annualized revenue and locked-in multi-year compute capacity is a different counterparty from a well-funded startup. The doubling of million-dollar enterprise accounts since February is particularly significant: those are organizations that have moved past pilots, are running production workloads, and are generating seven-figure annual commitments, presumably expanding from there.
Google's position in this deal is also worth noting. As both an Anthropic investor and its TPU supplier, Google has structured its relationship with Anthropic as a multi-layer partnership. The compute agreement deepens that tie, and it gives Google operational data on one of the largest TPU workloads on the planet, which feeds back into hardware roadmap decisions for future TPU generations.