Industry

Cisco's CEO Says Data Centers Should Go to Space to Solve AI's Power and Land Crisis

In a wide-ranging interview, Cisco CEO Chuck Robbins argued that the long-term solution to AI's infrastructure scaling problem — explosive power demand, geographic land constraints, cooling requirements — is orbital data centers. The claim sounds speculative but reflects a serious engineering discussion happening at the intersection of AI infrastructure and space technology.

D.O.T.S AI Newsroom

AI News Desk

3 min read

Chuck Robbins, CEO of Cisco, made a claim in a recent interview that sounds like science fiction but is increasingly treated as a serious infrastructure proposal: data centers should move to space. The argument connects directly to the bottlenecks currently limiting AI scaling (power availability, land, and cooling) that terrestrial infrastructure is struggling to solve.

The Infrastructure Constraint That Motivates the Idea

The AI compute buildout of the past two years has collided hard with physical limits. Power grids in the regions where hyperscalers want to build — Virginia's data center corridor, Arizona's sunbelt, the Midwest — are at or near capacity. Utilities are quoting interconnection timelines of five to ten years for new large-scale loads. Land with adequate power and water access is constrained in every major data center market. Cooling, which accounts for a significant fraction of data center operating cost, becomes more expensive as ambient temperatures rise.

These are not software problems. They cannot be addressed by better algorithms or more efficient chips — or at least, not fully. Robbins' argument, in its core form, is that if terrestrial infrastructure cannot scale fast enough to meet AI demand, the infrastructure will have to go somewhere else. Space is the somewhere else that removes the most binding constraints simultaneously.

Why Space Addresses the Constraints

Orbital data centers address the power problem by tapping solar energy directly: no atmospheric losses, near-continuous illumination in some orbits, and none of the transmission infrastructure that limits terrestrial solar deployment. They address the cooling problem differently than terrestrial facilities do. In vacuum, radiation is the only available heat-transfer mode, which demands large radiator surfaces, but it consumes no water and avoids the chillers and evaporative cooling that drive much of terrestrial operating cost. And they sidestep the land and grid interconnection problems entirely.
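The radiator-sizing side of that trade-off can be estimated with the Stefan-Boltzmann law. The sketch below uses illustrative assumptions (a 300 K radiator, emissivity 0.9, and it ignores absorbed sunlight and Earth-shine, which a real thermal design would have to subtract); none of these figures come from the article.

```python
# Back-of-envelope: radiative heat rejection for an orbital data center.
# Assumed values (radiator temperature, emissivity) are illustrative only.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power_per_m2(temp_k: float, emissivity: float = 0.9) -> float:
    """Power radiated per square metre of radiator surface.

    Ignores absorbed sunlight and Earth infrared, so this is an
    upper bound on net heat rejection per unit area.
    """
    return emissivity * SIGMA * temp_k ** 4

# A radiator running near room temperature (300 K):
p = radiated_power_per_m2(300.0)
print(f"{p:.0f} W/m^2")  # ~413 W/m^2

# Radiator area needed to reject 1 MW of IT heat at that temperature:
area = 1_000_000 / p
print(f"{area:.0f} m^2")  # roughly 2,400 m^2
```

The fourth-power dependence on temperature is why radiator sizing is so sensitive to how hot the electronics are allowed to run: letting the radiator operate at 350 K instead of 300 K shrinks the required area by nearly half.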

The connectivity challenge, latency between orbital infrastructure and terrestrial users, is real. But training workloads are batch processes that tolerate higher latency, so the constraint is far less binding for them than it would be for real-time inference. A split architecture, with orbital data centers handling training and terrestrial edge infrastructure handling inference, is plausible and partially resolves the latency concern.
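The latency floor in that split is simple geometry. A back-of-envelope estimate, assuming straight-down paths from two illustrative altitudes (a Starlink-like low-Earth-orbit shell at roughly 550 km, and geostationary orbit at 35,786 km), neither of which is specified in the article:

```python
# Back-of-envelope: minimum one-way light-travel latency from orbit to ground.
# Altitudes are illustrative assumptions; real paths add inter-satellite
# hops, ground-station routing, and queueing delay on top of this floor.

C = 299_792_458  # speed of light in vacuum, m/s

def one_way_latency_ms(altitude_km: float) -> float:
    """Minimum one-way latency for a straight-down path, in milliseconds."""
    return altitude_km * 1000 / C * 1000

leo = one_way_latency_ms(550)      # assumed LEO shell altitude
geo = one_way_latency_ms(35_786)   # geostationary altitude

print(f"LEO: ~{leo:.1f} ms one way, GEO: ~{geo:.0f} ms one way")
```

Even with routing overhead multiplying these floors several times over, round trips from low orbit land in the tens of milliseconds, comfortably tolerable for batch training jobs, while geostationary round trips of a quarter second would be painful for interactive inference. That asymmetry is what makes the training-in-orbit, inference-on-the-ground split coherent.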

Where the Serious Work Is Happening

Robbins is not the first infrastructure executive to float the idea. SpaceX's Starship development program has changed the economics of orbital payload delivery dramatically — reducing launch costs to the point where concepts that were financially absurd a decade ago are now worth serious feasibility analysis. Several startups are working on orbital data center concepts explicitly, and hyperscalers have not publicly dismissed the possibility.

Cisco's interest is not incidental. As the company that provides much of the networking infrastructure inside data centers, Cisco has a direct stake in how data center architecture evolves. Orbital data centers would require entirely rethought network architecture — high-bandwidth inter-satellite links, new terrestrial backhaul designs, latency-aware distributed computing frameworks. These are Cisco-sized problems.

Whether orbital data centers become practical at AI training scale in the next decade is genuinely uncertain. But the fact that the CEO of one of the world's largest networking companies is saying it publicly suggests the conversation has moved from speculation to considered possibility.
