- Google claims its Ironwood TPU pod is 24x faster than the El Capitan supercomputer
- An analyst says Google’s performance comparison is “perfectly silly”
- Comparing AI systems and HPC machines is fine, but the two serve different purposes
At the recent Google Cloud Next 2025 event, the tech giant claimed its new Ironwood TPU v7p pod is 24 times faster than El Capitan, the exascale-class supercomputer at Lawrence Livermore National Laboratory.
But Timothy Prickett Morgan of The Next Platform has rejected the claim.
“Google compares the sustained performance of El Capitan with 44,544 AMD ‘Antares-A’ Instinct MI300A hybrid CPU-GPU compute engines running the High Performance Linpack (HPL) benchmark at 64-bit floating point precision against the theoretical peak performance of an Ironwood pod,” he writes. “This is a perfectly silly comparison, and Google’s top brass should not only know better but do better.”
24x performance of El Capitan? No!
Prickett Morgan argues that while it is fair to compare AI systems and HPC machines, the two serve different purposes: El Capitan is optimized for high-precision simulations, while an Ironwood pod is tailored to AI inference and low-precision training.
What matters, he adds, is not just peak performance but cost. “High performance has to come at the lowest possible cost, and no one gets better deals on HPC gear than the US government’s Department of Energy.”
Estimates from The Next Platform suggest that an Ironwood pod delivering 21.26 exaflops of FP16 and 42.52 exaflops of FP8 performance costs $445 million to build and $1.1 billion to rent over three years. That works out to a cost per teraflops of $21 (build) or $52 (rent).
Meanwhile, El Capitan delivers 43.68 exaflops of FP16 and 87.36 exaflops of FP8 at a build cost of $600 million, or about $14 per teraflops.
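For readers who want to check those figures, here is a minimal arithmetic sketch in Python; the exaflops and dollar inputs are simply The Next Platform’s estimates quoted above, not official pricing:

```python
# Rough cost-per-teraflops arithmetic using the figures quoted above.
# All inputs are The Next Platform's estimates, not official pricing.

EXAFLOPS_TO_TERAFLOPS = 1_000_000  # 1 exaflops = 10^6 teraflops

ironwood_fp16_exaflops = 21.26     # Ironwood pod, peak FP16 exaflops
ironwood_build_usd = 445e6         # estimated cost to build
ironwood_rent_usd = 1.1e9          # estimated cost to rent over three years

el_capitan_fp16_exaflops = 43.68   # El Capitan, peak FP16 exaflops
el_capitan_build_usd = 600e6       # estimated cost to build

def cost_per_teraflops(cost_usd: float, exaflops: float) -> float:
    """Dollars per teraflops of peak FP16 performance."""
    return cost_usd / (exaflops * EXAFLOPS_TO_TERAFLOPS)

print(f"Ironwood pod, build: ${cost_per_teraflops(ironwood_build_usd, ironwood_fp16_exaflops):.0f}/teraflops")    # ~$21
print(f"Ironwood pod, rent:  ${cost_per_teraflops(ironwood_rent_usd, ironwood_fp16_exaflops):.0f}/teraflops")     # ~$52
print(f"El Capitan, build:   ${cost_per_teraflops(el_capitan_build_usd, el_capitan_fp16_exaflops):.0f}/teraflops")  # ~$14
```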
“El Capitan has 2.05x more performance at FP16 and FP8 precision than an Ironwood pod at peak theoretical performance,” notes Prickett Morgan. “An Ironwood pod does not have 24x the performance of El Capitan.”
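That 2.05x figure follows directly from the exaflops ratings quoted above; a quick check under the same assumptions:

```python
# Ratio of El Capitan's peak throughput to an Ironwood pod's,
# using the exaflops figures quoted above.
print(f"FP16: {43.68 / 21.26:.2f}x")  # ~2.05x
print(f"FP8:  {87.36 / 42.52:.2f}x")  # ~2.05x
```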
He adds: “HPL-MXP uses a lot of mixed precision calculations to converge to the same result as the all-FP64 math on the HPL test, and these days it delivers an effective performance boost of roughly an order of magnitude.”
The article also includes a comprehensive table (below) comparing top-end AI and HPC systems on performance, memory, storage, and cost effectiveness. While Google’s TPU pods remain competitive, Prickett Morgan maintains that El Capitan still has a clear advantage from a cost/performance point of view.
“This comparison is not perfect, we realize,” he admits. “All estimates are shown in bold red italics, and we have put question marks where we are unable to make an estimate at this time.”