- Oracle claims its Zettascale10 system can reach a peak of 16 zettaFLOPS
- The project uses around 800,000 Nvidia GPUs distributed across data centers
- OpenAI’s Stargate cluster in Texas runs on Oracle’s new infrastructure
Oracle has announced what it calls the largest AI supercomputer in the cloud, the OCI Zettascale10.
The company claims the system can deliver 16 zettaFLOPS of peak performance across 800,000 Nvidia GPUs.
Divided evenly, that works out to about 20 petaFLOPS per GPU, roughly the peak (likely low-precision) throughput Nvidia quotes for its Blackwell-generation GB300 accelerators, which are data-center parts rather than desktop chips.
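The division behind that per-GPU figure can be checked directly. This is a back-of-the-envelope sketch; the numeric precision (FP4, FP8, etc.) behind Oracle's 16-zettaFLOPS figure is not specified in the announcement:

```python
# Back-of-the-envelope check of Oracle's headline numbers.
total_peak_flops = 16e21      # 16 zettaFLOPS, as claimed (precision unspecified)
gpu_count = 800_000           # approximate GPU count cited by Oracle

per_gpu_flops = total_peak_flops / gpu_count
print(per_gpu_flops / 1e15, "petaFLOPS per GPU")  # prints 20.0 petaFLOPS per GPU
```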
Network design for large AI workloads
Oracle says the platform is the foundation of OpenAI’s Stargate cluster in Abilene, Texas, built to handle some of the most demanding AI workloads now emerging in research and commercial use.
“The highly scalable custom RoCE design maximizes gigawatt-scale fabric performance while keeping most of the power focused on compute…,” said Peter Hoeschele, vice president, Infrastructure and Industrial Compute, OpenAI.
At the core of the Zettascale10 system is the Oracle Acceleron RoCE network, designed to increase scalability and reliability for data-heavy AI operations.
This architecture uses network cards as mini-switches that connect GPUs across multiple isolated network planes.
The design aims to reduce latency between GPUs and allow jobs to continue running if a network path fails.
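The failover idea described above can be sketched in a few lines. This is purely illustrative: Oracle has not published the Acceleron routing logic, so the plane names and the failover policy here are assumptions for demonstration, not the actual implementation:

```python
# Illustrative only: models the idea of GPU traffic spread across
# isolated network planes, with jobs surviving a single-plane failure.
import random

class MultiPlaneFabric:
    """Toy model of traffic distributed over isolated network planes."""

    def __init__(self, planes):
        self.healthy = set(planes)   # planes currently usable

    def fail(self, plane):
        self.healthy.discard(plane)  # simulate a failed network path

    def send(self, payload):
        if not self.healthy:
            raise RuntimeError("all planes down; job must checkpoint")
        plane = random.choice(sorted(self.healthy))  # pick any live plane
        return f"{payload} via {plane}"

fabric = MultiPlaneFabric(["plane-a", "plane-b", "plane-c"])
fabric.fail("plane-b")               # one path drops out...
print(fabric.send("gradient-sync"))  # ...traffic continues on another plane
```

The design choice being illustrated is isolation: because each plane is independent, losing one degrades capacity rather than killing the running job.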
“With Nvidia full-stack AI infrastructure, OCI Zettascale10 provides the computing power needed to advance advanced AI research and help organizations everywhere move from experimentation to industrialized AI,” said Ian Buck, vice president of Hyperscale, Nvidia.
Oracle claims this structure can lower costs by simplifying tiers of the network while maintaining consistent performance across nodes.
It also introduces Linear Pluggable Optics (LPO) and Linear Receive Optics (LRO) to cut power and cooling requirements without sacrificing bandwidth.
While Oracle’s numbers are impressive, the company has not provided independent verification of its 16 zettaFLOPS claim.
Cloud performance metrics may vary depending on how throughput is calculated, and Oracle’s comparison may rely on theoretical peaks rather than sustained speeds.
Given that the advertised total appears to be the sum of the theoretical peaks of 800,000 top-end GPUs, real-world performance will depend heavily on network design and software optimization.
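The gap between theoretical peak and sustained throughput can be large. The utilization figures below are hypothetical, not Oracle numbers; model FLOPS utilization in the 30-50% range is a commonly cited ballpark for large training runs:

```python
# Hypothetical scenarios: sustained throughput at assumed utilization
# levels, against the claimed 16-zettaFLOPS theoretical peak.
peak_zettaflops = 16.0

for utilization in (0.3, 0.4, 0.5):
    sustained = peak_zettaflops * utilization
    print(f"{utilization:.0%} utilization -> {sustained:.1f} zettaFLOPS sustained")
```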
Analysts will be watching to see whether the configuration delivers performance comparable to the leading AI clusters already operated by other major cloud providers.
Zettascale10 places Oracle alongside other major players racing to provide the infrastructure behind the best GPUs and AI tools.
The company says customers could train and deploy large-scale models across Oracle’s distributed cloud environment, backed by data sovereignty measures.
Oracle also says Zettascale10 offers operational flexibility through plane-level independent maintenance, allowing individual network planes to be updated with less downtime.
“With OCI Zettascale10, we are merging OCI’s Oracle Acceleron RoCE network architecture with next-generation Nvidia AI infrastructure to deliver multi-gigawatt AI capacity at unprecedented scale,” said Mahesh Thiagarajan, executive vice president, Oracle Cloud Infrastructure.
“Customers can build, train and deploy their largest AI models in production using less power and will have the freedom to operate across Oracle’s distributed cloud with strong data and AI sovereignty…”
Still, observers note that other vendors are building their own large GPU clusters and advanced cloud storage systems, which could narrow Oracle’s advantage.
This system will roll out next year, and only then will it be clear whether the architecture can meet the demand for scalable, efficient and reliable AI computing.
Via HPCWire