Industry

NVIDIA and Emerald AI Are Building 'Power-Flexible' AI Data Centers That Act as Grid Batteries

NVIDIA and energy startup Emerald AI unveiled a new architecture for AI data centers that can dynamically reduce power consumption during peak grid demand — effectively turning AI factories into grid-scale demand-response assets. The concept reframes AI infrastructure from a pure power consumer to an active participant in grid stability.

D.O.T.S AI Newsroom

3 min read

At CERAWeek, the annual energy industry gathering often described as the Davos of energy, NVIDIA and Emerald AI announced a new architecture for AI data centers that they call "power-flexible AI factories." The concept inverts a central assumption of the current AI infrastructure buildout: rather than treating data centers as fixed, maximum-draw consumers of electricity, the architecture lets facilities dynamically reduce their power consumption during periods of grid stress, functioning as demand-response assets that help stabilize electricity networks.

How Power Flexibility Works

Traditional AI training clusters are designed to consume power at close to maximum capacity continuously. The workloads — training large language models, running inference at scale — are inherently compute-intensive and don't lend themselves to easy interruption. But modern AI infrastructure also includes substantial headroom in its workload scheduling: not every job is equally time-critical, and facilities can defer non-urgent inference or prefetch operations to periods of low grid demand.

Emerald AI's contribution is software that characterizes and schedules this latent flexibility in real time. By integrating with grid operators' demand-response programs, an AI factory running Emerald's platform can commit to reducing power draw by a defined amount — say, 10-15% — within minutes of a grid stability event, earning revenue from grid operators in exchange for that reliability service.
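Conceptually, that scheduling layer amounts to a priority-aware curtailment decision: when a grid event arrives, pause enough deferrable work to hit the committed reduction while leaving time-critical jobs untouched. The sketch below is a hypothetical illustration of that logic, not Emerald's actual software; the job names, power figures, and greedy selection strategy are all invented for clarity.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    power_mw: float   # current power draw attributed to this job
    deferrable: bool  # True if the job can be paused and rescheduled

def plan_curtailment(jobs, target_reduction_mw):
    """Pick deferrable jobs to pause until the grid operator's requested
    reduction is met. Largest draws first, so the fewest jobs are
    interrupted."""
    paused, shed_mw = [], 0.0
    for job in sorted(jobs, key=lambda j: j.power_mw, reverse=True):
        if shed_mw >= target_reduction_mw:
            break
        if job.deferrable:
            paused.append(job.name)
            shed_mw += job.power_mw
    return paused, shed_mw

# Example: a facility asked to shed 12 MW on a grid-stability signal.
jobs = [
    Job("frontier-training-run", 60.0, False),  # time-critical, keep running
    Job("batch-inference-backlog", 18.0, True),
    Job("embedding-prefetch", 8.0, True),
    Job("eval-suite-nightly", 5.0, True),
]
paused, shed_mw = plan_curtailment(jobs, target_reduction_mw=12.0)
print(paused, shed_mw)  # pauses the largest deferrable job first
```

A production system would obviously weigh job deadlines, checkpointing costs, and partial throttling (for example, GPU power capping) rather than binary pause/run decisions, but the core idea is the same: flexibility lives in the gap between time-critical and deferrable work.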

NVIDIA's role in the partnership is to validate that the power-flexibility scheduling is compatible with its hardware stack and to include the architecture in its reference designs for data center customers.

The Strategic Context

The announcement lands at a moment of intense political and policy pressure on AI's energy footprint. Meta's disclosure of 10 dedicated natural gas plants for a single data center campus, published just days earlier, had reignited the conversation about AI infrastructure's climate impact. Hyperscalers including Microsoft, Google, and Amazon are all contending with the gap between their public net-zero commitments and the electricity demand curves their AI buildouts require.

Power-flexible AI factories don't solve the overall energy demand problem — an AI data center running at 85% of maximum power during a grid event is still an enormous electricity consumer. But the architecture addresses a different dimension of the problem: grid stability rather than absolute consumption. A facility that can reliably reduce demand on signal is fundamentally different, from a grid management perspective, than one that draws fixed maximum power regardless of system conditions.

Market Implications

If demand-response participation becomes a standard feature of AI data center design, it creates a novel economic model for facility operators: revenue from grid operators that partially offsets energy costs, in exchange for committing to flexible consumption. In markets with well-developed demand-response programs — Texas, California, parts of Europe — this revenue could be material at the scale of hyperscale AI deployments.
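A back-of-envelope calculation shows what "partially offsets" might look like. Every figure below is an illustrative assumption, not a number from the announcement or from any specific market: a 100 MW facility committing 15 MW of flexible capacity at an assumed capacity payment and wholesale power price.

```python
# All figures are illustrative assumptions for scale, not market data.
facility_mw = 100.0
flexible_mw = 15.0                       # committed curtailable capacity
capacity_payment_per_mw_year = 50_000.0  # assumed demand-response capacity price
energy_price_per_mwh = 60.0              # assumed average wholesale power price

# Annual energy bill if the facility ran at full draw year-round (8760 h).
annual_energy_cost = facility_mw * 8760 * energy_price_per_mwh
annual_dr_revenue = flexible_mw * capacity_payment_per_mw_year

offset_pct = 100 * annual_dr_revenue / annual_energy_cost
print(f"Energy cost: ${annual_energy_cost:,.0f}/yr")
print(f"DR revenue:  ${annual_dr_revenue:,.0f}/yr ({offset_pct:.1f}% offset)")
```

Under these assumptions the offset is on the order of one to two percent of the power bill: modest in relative terms, but hundreds of thousands of dollars per facility per year, and larger in markets that also pay for energy actually curtailed during events.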

The broader implication is that AI infrastructure, which has been framed primarily as a burden on power grids, may evolve into a participant in grid management — shifting the political economy of AI energy policy from pure liability to mixed stakeholder.


Related Stories

AWS Has Billions in Both Anthropic and OpenAI. Its Boss Explains Why That's Not a Problem.
Industry


Amazon Web Services CEO Matt Garman defended the company's parallel multi-billion dollar investments in both Anthropic and OpenAI in a wide-ranging interview this week. The explanation reveals a cloud strategy built on AI model agnosticism — and a bet that AWS wins regardless of which AI lab dominates, as long as the compute runs on its infrastructure.

D.O.T.S AI Newsroom
Anthropic Poaches Microsoft's Azure AI Chief to Fix Its Infrastructure Problem
Industry


Anthropic has recruited Eric Boyd, a senior Microsoft executive who led Azure AI services, as its new head of infrastructure. The hire is a direct response to the scaling bottlenecks that have limited Claude's availability during peak demand — and signals that Anthropic is treating infrastructure as a first-tier strategic priority heading into 2026.

D.O.T.S AI Newsroom
Intel's Nerdy Bet on Advanced Chip Packaging Could Decide Who Wins the AI Infrastructure Race
Industry


As the AI buildout pushes the limits of what individual chips can do, the unglamorous discipline of chip packaging — connecting multiple dies into a single system — is emerging as a genuine competitive moat. Wired reports that Intel is making an aggressive bet on advanced packaging technology that could position the company at the center of the next phase of AI hardware scaling, even as it struggles to compete on raw process technology.

D.O.T.S AI Newsroom