Cisco's CEO Says Data Centers Should Go to Space to Solve AI's Power and Land Crisis
In a wide-ranging interview, Cisco CEO Chuck Robbins argued that the long-term solution to AI's infrastructure scaling problem — explosive power demand, geographic land constraints, cooling requirements — is orbital data centers. The claim sounds speculative but reflects a serious engineering discussion happening at the intersection of AI infrastructure and space technology.

D.O.T.S AI Newsroom
AI News Desk
In a recent interview, Cisco CEO Chuck Robbins made a claim that sounds like science fiction but is increasingly treated as serious infrastructure planning: data centers should move to space. The argument connects directly to the bottlenecks currently limiting AI scaling, namely power availability, land, and cooling, all of which terrestrial infrastructure is struggling to relieve.
The Infrastructure Constraint That Motivates the Idea
The AI compute buildout of the past two years has collided hard with physical limits. Power grids in the regions where hyperscalers want to build — Virginia's data center corridor, Arizona's sunbelt, the Midwest — are at or near capacity. Utilities are quoting interconnection timelines of five to ten years for new large-scale loads. Land with adequate power and water access is constrained in every major data center market. Cooling, which accounts for a significant fraction of data center operating cost, becomes more expensive as ambient temperatures rise.
These are not software problems. They cannot be addressed by better algorithms or more efficient chips — or at least, not fully. Robbins' argument, in its core form, is that if terrestrial infrastructure cannot scale fast enough to meet AI demand, the infrastructure will have to go somewhere else. Space is the somewhere else that removes the most binding constraints simultaneously.
Why Space Addresses the Constraints
Orbital data centers address the power problem by tapping solar energy directly: without atmospheric losses, with access that can be nearly continuous in suitably chosen orbits, and without the transmission infrastructure that limits terrestrial solar deployment. They address the cooling problem by radiating waste heat into the vacuum of space, which can be significantly more efficient than air or liquid cooling in Earth environments. And they sidestep the land and grid interconnection problems entirely.
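The radiative-cooling point can be sanity-checked with the Stefan-Boltzmann law. The sketch below is a back-of-envelope estimate only; the radiator temperature, emissivity, and sink temperature are illustrative assumptions, not figures from any real orbital design.

```python
# Back-of-envelope radiative cooling estimate via the Stefan-Boltzmann law.
# All parameter values below are illustrative assumptions, not engineering
# figures for any proposed orbital data center.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def net_radiated_power_per_m2(radiator_temp_k, sink_temp_k=3.0, emissivity=0.9):
    """Net heat rejected per square meter of radiator surface.

    sink_temp_k: effective background temperature. Deep space is ~3 K;
    a sun- or Earth-facing radiator sees a much warmer effective sink,
    which this simple model does not capture.
    """
    return emissivity * SIGMA * (radiator_temp_k**4 - sink_temp_k**4)

# A radiator running at 330 K (about 57 C) facing deep space rejects
# on the order of 600 W per square meter under these assumptions.
p = net_radiated_power_per_m2(330.0)
print(f"{p:.0f} W/m^2 rejected")

# Radiator area implied for a 1 MW compute load:
print(f"{1e6 / p:.0f} m^2 of radiator per MW")
```

Under these assumptions a single megawatt of compute implies radiator area on the order of a couple of thousand square meters, which gives a sense of why the thermal engineering, not only launch cost, shapes the feasibility question.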
The connectivity challenge (latency between orbital infrastructure and terrestrial users) is real. But training workloads are batch processes that tolerate higher latency, so the constraint binds them far less than it would real-time inference. Pairing orbital data centers for AI training with terrestrial edge infrastructure for inference is a plausible architectural split that partially resolves the latency concern.
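For a sense of scale on that latency, the minimum round-trip light travel time to a satellite can be sketched as below. The altitudes are illustrative, and real links add routing, queuing, and ground-segment delays this ignores.

```python
# Idealized round-trip propagation delay to a satellite directly overhead.
# Altitudes are illustrative assumptions; real-world latency is higher.

C_KM_PER_S = 299_792.458  # speed of light in vacuum

def round_trip_ms(altitude_km):
    """Minimum round-trip light travel time to the given altitude, in ms."""
    return 2 * altitude_km / C_KM_PER_S * 1000

for name, alt_km in [("LEO (~550 km)", 550),
                     ("MEO (~8,000 km)", 8000),
                     ("GEO (~35,786 km)", 35786)]:
    print(f"{name}: {round_trip_ms(alt_km):.1f} ms minimum round trip")
```

Even the geostationary worst case is a fraction of a second, negligible against a training run measured in days or weeks, while the same quarter-second would be disqualifying for many interactive inference workloads.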
Where the Serious Work Is Happening
Robbins is not the first infrastructure executive to float the idea. SpaceX's Starship development program has changed the economics of orbital payload delivery dramatically — reducing launch costs to the point where concepts that were financially absurd a decade ago are now worth serious feasibility analysis. Several startups are working on orbital data center concepts explicitly, and hyperscalers have not publicly dismissed the possibility.
Cisco's interest is not incidental. As the company that provides much of the networking infrastructure inside data centers, Cisco has a direct stake in how data center architecture evolves. Orbital data centers would require entirely rethought network architecture — high-bandwidth inter-satellite links, new terrestrial backhaul designs, latency-aware distributed computing frameworks. These are Cisco-sized problems.
Whether orbital data centers become practical at AI training scale in the next decade is genuinely uncertain. But the fact that the CEO of one of the world's largest networking companies is saying it publicly suggests the conversation has moved from speculation to considered possibility.