The $9 Trillion Question: Is the AI Data Centre Boom Building Toward a Historic Bust?
Hyperscalers have committed over $300 billion to AI infrastructure in 2026 alone. Industry analysts are now questioning whether demand projections supporting these investments are realistic — or whether the AI buildout is creating the largest infrastructure bubble since the dot-com fibre glut.

D.O.T.S AI Newsroom
AI News Desk
The numbers are extraordinary by any historical comparison. Microsoft, Google, Amazon, and Meta have collectively committed over $300 billion in AI infrastructure capital expenditure for 2026, up from roughly $200 billion the previous year. At the current trajectory, total global AI data centre investment over the decade is projected to approach $9 trillion. The Financial Times reported this week on growing analyst concern that these projections may be disconnected from the demand fundamentals that would justify them.
The Bull Case
The investment thesis rests on two premises that the hyperscalers state openly. First, that AI will become the dominant compute workload, displacing traditional cloud services as the primary revenue driver for the major platforms. Second, that inference demand — running AI models in production, not just training them — will scale dramatically as AI assistants, agents, and embedded tools proliferate across enterprise workflows.
Both premises have real supporting evidence. Enterprise AI adoption is accelerating, not decelerating. The market for AI-enabled software — productivity tools, coding assistants, customer service automation — is growing at rates that traditional software markets rarely sustain. GPU utilisation rates at major cloud providers remain high.
The Bear Case
The concern articulated by sceptical analysts centres on the efficiency curve. Each generation of AI hardware and model architecture delivers substantially better inference performance per dollar than the previous one. NVIDIA's Blackwell architecture, for instance, delivers roughly four times the inference throughput of its Hopper predecessor at comparable power consumption. If inference efficiency continues to improve at this pace — which the model scaling research broadly supports — the same amount of useful AI compute can be delivered by a significantly smaller physical footprint over time.
The implication is that the demand curve for raw compute capacity may plateau faster than the infrastructure investment timeline assumes. Hyperscalers are deploying capital on the basis of current inference efficiency, but the infrastructure they are funding will come online in 2027 and 2028, by which point the models running on it may require a fraction of the compute that today's equivalents do.
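The tension between demand growth and efficiency gains can be made concrete with a toy model. The sketch below is purely illustrative: the growth and efficiency multipliers are assumed numbers chosen to show the dynamic, not forecasts drawn from the article or any analyst report.

```python
# Toy model (illustrative assumptions only): how much physical data centre
# footprint is needed when useful-compute demand grows each year while
# per-unit inference efficiency (hardware + model improvements) also grows.

def required_footprint(demand_growth: float, efficiency_gain: float, years: int) -> float:
    """Relative physical footprint after `years`, normalised to 1.0 today.

    demand_growth    -- annual multiplier on useful AI compute demanded
    efficiency_gain  -- annual multiplier on useful compute delivered per
                        unit of physical capacity
    """
    return (demand_growth / efficiency_gain) ** years

# Assumption: demand triples annually while efficiency doubles annually
# (roughly one Hopper-to-Blackwell-scale jump every couple of years, plus
# model-level gains). Footprint still grows, but far slower than demand.
print(required_footprint(3.0, 2.0, 3))  # 3.375x today's footprint

# If efficiency outpaces demand, capacity built today overshoots:
print(required_footprint(2.0, 3.0, 3))  # ~0.30x of today's footprint
```

The bear case is, in effect, a bet that the second scenario applies to at least part of the capacity now under construction: infrastructure sized against today's efficiency can end up several times larger than what the workloads of 2027-28 actually require.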
The Dot-Com Parallel
The historical parallel that keeps appearing in analyst notes is the late-1990s fibre optic buildout. Telecoms and infrastructure companies laid hundreds of thousands of miles of fibre on projections of internet traffic growth that were, in absolute terms, correct — internet traffic did grow enormously. But it grew on the back of efficiency improvements that made existing capacity far more valuable, and the overbuilt physical infrastructure took nearly a decade to be absorbed. Many of the companies that built it did not survive to see the demand materialise.
The AI data centre situation is not identical. Unlike dark fibre, AI infrastructure has shorter depreciation cycles and can be repurposed across workloads. But the underlying risk is structurally similar: capital is being deployed faster than the demand assumptions behind it can be validated. The $9 trillion question the industry faces is not whether AI will be large. It almost certainly will be. The question is whether the infrastructure being built today will still be the right infrastructure when demand arrives at scale.