- AWS built custom NVIDIA GPU cooling after rejecting existing liquid cooling solutions at scale
- The IRHX fits into AWS racks with no changes to existing infrastructure
- Amazon could extend this cooling approach to Graviton chips in the future
Amazon Web Services (AWS) has introduced a proprietary cooling system built to handle the requirements of NVIDIA’s latest GPUs.
The In-Row Heat Exchanger (IRHX) was developed in response to the growing power and heat demands of hardware such as NVIDIA's GB200 NVL72.
AWS evaluated existing liquid cooling solutions but found that they did not fit the company's needs.
AWS Graviton next?
“They would take up too much data center floor space, would still require major changes to data centers, or would significantly increase water consumption,” said Dave Brown, VP of Compute and ML Services at AWS, in a presentation posted on YouTube.
“And while some of these solutions could work at lower volumes for other providers, they simply wouldn’t provide enough liquid cooling capacity to support our scale.”
The IRHX system consists of a pump unit, a water distribution cabinet, and fan coils. Liquid coolant cools the chips through cold plates co-designed by AWS and NVIDIA, then cycles back through the IRHX, where it is cooled and recirculated.
“With IRHX we don’t have to design the data center around the rack,” Brown said.
The system supports AWS’s most powerful EC2 instance, the P6e UltraServer, which includes the GB200 NVL72. This rack-scale setup allows 72 Blackwell GPUs to work together as a single unit.
Brown said the GB200 NVL72 “enables 72 Nvidia Blackwell GPUs to act as a single massive GPU.”
Amazon has previously built custom hardware, including chips and networking systems. The IRHX extends that strategy to cooling, allowing AWS to deploy new GPU racks without redesigning its facilities.
The company said the system fits existing rack dimensions and infrastructure, making it scalable across global data centers.
While the IRHX is currently paired with NVIDIA’s Blackwell-based systems, it could also be used with Amazon’s own Graviton chips should their cooling needs rise.
For now, the system supports AI workloads that demand both scale and speed.