Industry

Nvidia Sets New MLPerf Records With 288 GPUs as AMD and Intel Fight on Different Battlegrounds

Nvidia has shattered MLPerf inference records using a system configuration of 288 Blackwell GPUs, establishing new peaks across multiple AI workload categories. Meanwhile, AMD and Intel chose to emphasize different metrics, a telling divergence that reveals how each company thinks about where the real AI infrastructure competition will be fought.

D.O.T.S AI Newsroom

AI News Desk

2 min read

Nvidia has posted new top scores in the latest round of MLPerf Inference benchmarks, using a configuration of 288 Blackwell GPUs to set records across multiple AI workload categories. The results, reported by The Decoder, reinforce Nvidia's continued dominance in raw AI inference throughput — but the more interesting story is what AMD and Intel chose to do instead.

Nvidia's Numbers

The 288-GPU Blackwell configuration represents a hyperscale deployment scenario rather than a typical enterprise purchase, but MLPerf submissions at this scale serve a clear purpose: they demonstrate the ceiling of what Nvidia's architecture can deliver and provide data center operators with performance projections for large-scale inference clusters.

The latest MLPerf round also introduced new workload categories — including multimodal and video model inference benchmarks — reflecting the shift in production AI from text-only LLMs toward multimodal systems. Nvidia's results span the new categories as well as the established text and image workloads, suggesting the Blackwell architecture's flexibility across modalities.

AMD and Intel's Strategic Choice

AMD and Intel both participated in the benchmark round but chose to emphasize different dimensions of performance rather than competing head-to-head on peak throughput. This is a meaningful signal: direct raw throughput competition with Nvidia on its own terms is not currently winnable at the high end, so both challengers are instead building credibility in specific niches — energy efficiency, cost-per-token at moderate scale, and integration with non-GPU accelerators.

AMD's ROCm-based submissions highlighted performance-per-watt metrics and inference efficiency on the MI300X at deployment scales more relevant to enterprise buyers than hyperscalers. Intel's results focused on Gaudi 3 performance in cost-sensitive inference scenarios.
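The contrast between peak throughput and efficiency can be made concrete with a little arithmetic. The sketch below illustrates the two metrics the challengers emphasized, performance per watt and cost per million tokens; all throughput, power, and price figures are invented for illustration and are not MLPerf results or vendor numbers.

```python
# Hypothetical illustration of the efficiency metrics AMD and Intel
# emphasized: tokens/sec/watt and dollars per million tokens served.
# All figures below are invented for illustration, not benchmark data.

def perf_per_watt(tokens_per_sec: float, watts: float) -> float:
    """Inference efficiency: tokens generated per second per watt drawn."""
    return tokens_per_sec / watts

def cost_per_million_tokens(tokens_per_sec: float, dollars_per_hour: float) -> float:
    """Serving cost: dollars to generate one million tokens at steady state."""
    tokens_per_hour = tokens_per_sec * 3600
    return dollars_per_hour / tokens_per_hour * 1_000_000

# Accelerator A: higher peak throughput, but higher power draw and rental cost.
# Accelerator B: lower throughput, but cheaper and more power-efficient.
a = {"tps": 12_000, "watts": 1_000, "usd_hr": 6.00}
b = {"tps": 7_000,  "watts": 500,   "usd_hr": 2.50}

for name, acc in (("A", a), ("B", b)):
    print(name,
          round(perf_per_watt(acc["tps"], acc["watts"]), 2),
          round(cost_per_million_tokens(acc["tps"], acc["usd_hr"]), 4))
```

On these made-up numbers, the slower accelerator B comes out ahead on both efficiency metrics (14.0 vs. 12.0 tokens/sec/watt, and a lower cost per million tokens) even though A holds the raw-throughput crown, which is exactly the argument a challenger makes when it declines to compete on peak numbers.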

What MLPerf Tells the Market

MLPerf benchmarks are imperfect proxies for real-world AI infrastructure decisions — actual workload characteristics, memory bandwidth requirements, and software stack maturity all matter as much as peak throughput. But the divergent strategies on display in this round reveal something genuine: Nvidia is playing to extend its peak performance lead while AMD and Intel are quietly making the case that the vast middle of the enterprise AI market doesn't need that peak — and can be served at lower cost by architectures optimized for efficiency over absolute throughput.

That is a rational competitive strategy, and one that may prove durable as AI inference becomes a volume commodity workload rather than a specialized capability.

Related Stories

AWS Has Billions in Both Anthropic and OpenAI. Its Boss Explains Why That's Not a Problem.
Industry

Amazon Web Services CEO Matt Garman defended the company's parallel multi-billion dollar investments in both Anthropic and OpenAI in a wide-ranging interview this week. The explanation reveals a cloud strategy built on AI model agnosticism — and a bet that AWS wins regardless of which AI lab dominates, as long as the compute runs on its infrastructure.

D.O.T.S AI Newsroom
Anthropic Poaches Microsoft's Azure AI Chief to Fix Its Infrastructure Problem
Industry

Anthropic has recruited Eric Boyd, a senior Microsoft executive who led Azure AI services, as its new head of infrastructure. The hire is a direct response to the scaling bottlenecks that have limited Claude's availability during peak demand — and signals that Anthropic is treating infrastructure as a first-tier strategic priority heading into 2026.

D.O.T.S AI Newsroom
Intel's Nerdy Bet on Advanced Chip Packaging Could Decide Who Wins the AI Infrastructure Race
Industry

As the AI buildout pushes the limits of what individual chips can do, the unglamorous discipline of chip packaging — connecting multiple dies into a single system — is emerging as a genuine competitive moat. Wired reports that Intel is making an aggressive bet on advanced packaging technology that could position the company at the center of the next phase of AI hardware scaling, even as it struggles to compete on raw process technology.

D.O.T.S AI Newsroom