- Meta’s ORW design sets a new direction for openness in data centers
- AMD is pushing silicon-to-rack openness, even as industry neutrality remains uncertain
- Liquid cooling and Ethernet fabric highlight the system’s serviceability focus
At the recent 2025 Open Compute Project (OCP) Global Summit in San Jose, AMD unveiled its “Helios” rack-scale platform, built on Meta’s recently introduced Open Rack Wide (ORW) standard.
The design was described as an open, double-wide frame that aims to improve power efficiency, cooling and serviceability for artificial intelligence systems.
AMD is positioning “Helios” as a major step toward open and interoperable data center infrastructure, but how much that openness translates into practical industry-wide adoption remains to be seen.
Meta contributed the ORW specification to the OCP community and described it as a foundation for large AI data centers.
The new form factor was developed to meet the growing demand for standardized hardware architectures.
AMD’s “Helios” appears to serve as a test case for this concept, blending Meta’s open-rack principles with AMD’s own hardware.
This collaboration signals a move away from proprietary systems, although reliance on major players such as Meta raises questions about the standard’s neutrality and accessibility.
The “Helios” system is powered by AMD Instinct MI450 GPUs based on the CDNA architecture, together with EPYC CPUs and Pensando networking hardware.
Each MI450 reportedly offers up to 432 GB of high-bandwidth memory (HBM4) and 19.6 TB/s of memory bandwidth, providing capacity for data-intensive AI workloads.
At full scale, a “Helios” rack equipped with 72 of these GPUs is expected to reach 1.4 exaFLOPS in FP8 and 2.9 exaFLOPS in FP4 precision, supported by 31 TB of HBM4 memory and 1.4 PB/s of total bandwidth.
It also provides up to 260 TB/s of internal scale-up interconnect throughput and 43 TB/s of Ethernet-based scale-out bandwidth.
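The rack-level memory figures line up with the per-GPU numbers. A quick sanity check, assuming the quoted totals are simple sums across the 72 GPUs (the GPU count and per-GPU figures are taken from the reported specs, not independently verified):

```python
# Back-of-the-envelope check: do 72 MI450s account for the quoted rack totals?
GPUS_PER_RACK = 72
HBM_PER_GPU_GB = 432      # reported HBM4 capacity per MI450
BW_PER_GPU_TBS = 19.6     # reported memory bandwidth per MI450, TB/s

rack_hbm_tb = GPUS_PER_RACK * HBM_PER_GPU_GB / 1000  # -> ~31.1 TB
rack_bw_pbs = GPUS_PER_RACK * BW_PER_GPU_TBS / 1000  # -> ~1.41 PB/s

print(f"Rack HBM: ~{rack_hbm_tb:.1f} TB")
print(f"Aggregate memory bandwidth: ~{rack_bw_pbs:.2f} PB/s")
```

Both results round to the 31 TB of HBM4 and 1.4 PB/s of aggregate bandwidth AMD quotes for a full rack, suggesting those are straight per-GPU multiples rather than separately measured system figures.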
AMD rates the system at up to 17.9 times the performance of its previous generation, with about 50% more memory capacity and bandwidth than Nvidia’s Vera Rubin system.
AMD claims this represents a jump in AI training and inference capabilities, but these are engineering estimates and not field results.
The rack’s open design incorporates OCP DC-MHS, UALink and Ultra Ethernet Consortium frameworks, allowing for both scale-up and scale-out.
It also includes liquid cooling and standards-based Ethernet for reliability.
The “Helios” design extends the company’s chip-to-rack-level openness efforts, and Oracle’s early commitment to deploy 50,000 AMD GPUs suggests commercial interest.
Still, broad ecosystem support will determine whether “Helios” becomes a shared industry standard or remains another branded interpretation of open infrastructure.
As AMD targets 2026 for volume deployment, the degree to which competitors and partners adopt ORW will reveal whether openness in AI hardware can move beyond concept to real practice.
“Open collaboration is key to scaling AI effectively,” said Forrest Norrod, executive vice president and general manager, Data Center Solutions Group, AMD.
“With ‘Helios’, we’re turning open standards into real, deployable systems—combining AMD Instinct GPUs, EPYC CPUs, and open fabrics to give the industry a flexible, high-performance platform built for next-generation AI workloads.”