- AI-focused racks are expected to consume up to 1 MW each by 2030
- Average rack power is expected to rise steadily to 30-50 kW over the same period
- Cooling and power distribution become strategic priorities for future data centers
For a long time, the rack has been considered the basic unit of the data center. With the rise of AI that is changing, and a new graph (above) from Lennox Data Center Solutions shows how quickly the shift is unfolding.
Where racks once consumed only a few kilowatts, the company's projections suggest that by 2030 an AI-focused rack could draw 1 MW, a scale once reserved for entire facilities.
Average data center racks are expected to reach 30-50 kW over the same period, reflecting a steady rise in compute density, and the contrast with AI workloads is striking.
New requirements for power delivery and cooling
According to the projections, a single AI rack can use 20 to 30 times the power of its general-purpose counterpart, creating new requirements for power delivery and cooling infrastructure.
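As a rough illustration of where that 20-30x figure comes from, the back-of-envelope sketch below simply divides the projected 1 MW AI rack by the projected 30-50 kW average rack; the figures are the article's projections, and the calculation is only arithmetic.

```python
# Back-of-envelope check of the 20-30x figure using the projections above.
ai_rack_w = 1_000_000          # projected AI-focused rack draw by 2030 (1 MW)
avg_rack_w = (30_000, 50_000)  # projected average rack range (30-50 kW)

for avg in avg_rack_w:
    print(f"1 MW AI rack vs {avg / 1000:.0f} kW average rack: {ai_rack_w / avg:.0f}x")
# -> roughly 33x against a 30 kW rack, 20x against a 50 kW rack
```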
Ted Pulfer, director of Lennox Data Center Solutions, said cooling has become central to the industry.
“Cooling, once just part of the supporting infrastructure, has now moved to the forefront of the conversation, driven by increasing compute densities, AI workloads and growing interest in approaches such as liquid cooling,” he said.
Pulfer described the level of industry collaboration now taking place. “Manufacturers, engineers and end users are all working more closely than ever, sharing insights and experimenting together both in the lab and in the real world.”
The goal of supplying 1 MW to a single rack also transforms how systems are built.
“Instead of traditional lower-voltage AC, the industry is moving towards high-voltage DC, such as +/- 400V. This reduces power loss and cable size,” explained Pulfer.
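The reasoning behind that claim is the familiar I²R relationship: for a fixed power, a higher bus voltage means less current, and conduction loss falls with the square of the current. The sketch below uses illustrative numbers only; the cable resistance and the two bus voltages being compared are assumptions for the example, not figures from the article.

```python
# Illustrative only: why a higher distribution voltage cuts conduction loss.
# The cable resistance and voltages below are assumptions for this sketch.
P = 1_000_000   # 1 MW delivered to the rack
R = 0.002       # assumed end-to-end cable/busbar resistance, ohms

for V in (400, 800):     # e.g. a lower-voltage bus vs a +/-400 V DC bus (800 V pole to pole)
    I = P / V            # current needed to deliver the same power
    loss = I**2 * R      # I^2 R conduction loss in the cable
    print(f"{V} V bus: {I:.0f} A, {loss / 1000:.1f} kW lost in the cable")
# Doubling the voltage halves the current and cuts I^2 R loss by ~4x,
# which is also why conductors can be thinner for the same delivered power.
```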
“Cooling is handled by the facility’s central CDUs (coolant distribution units), which control fluid flow to rack manifolds. From there, the liquid is delivered to individual cold plates mounted directly on the servers’ hottest components.”
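To give a sense of what a CDU loop has to move at these densities, the sketch below applies the standard heat-balance formula Q = ṁ·c_p·ΔT to the 1 MW rack projection. The coolant properties and the 10 °C temperature rise across the rack are illustrative assumptions, not figures from the article or from Lennox.

```python
# Rough sizing sketch for a CDU loop: coolant flow needed to carry away ~1 MW
# of rack heat. Coolant properties and temperature rise are assumptions.
rack_heat_w = 1_000_000   # heat to reject (W), matching the 1 MW rack projection
cp = 4186                 # specific heat of a water-like coolant, J/(kg*K)
delta_t = 10              # assumed coolant temperature rise across the rack, K

mass_flow = rack_heat_w / (cp * delta_t)   # from Q = m_dot * cp * dT, in kg/s
litres_per_min = mass_flow * 60            # ~1 kg of water is roughly 1 litre

print(f"{mass_flow:.1f} kg/s ≈ {litres_per_min:.0f} L/min of coolant")
# -> roughly 24 kg/s, i.e. on the order of 1,400 L/min for a 10 C rise
```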
Most liquid-cooled data centers today rely on cold plates, but the approach has limits. Microsoft has tested microfluidics, in which tiny grooves are etched into the back of the chip itself, allowing coolant to flow directly over the silicon.
In early experiments, this removed heat up to three times more efficiently than cold plates, depending on the workload, and reduced the GPU's temperature rise by 65%.
By combining this design with AI, which maps hotspots across the chip, Microsoft was able to direct coolant with greater precision.
Although hyperscalers could dominate this space, Pulfer believes that smaller operators still have room to compete.
“At times, the amount of orders moving through factories can create bottlenecks with delivery, which opens the door for others to step in and add value. In this fast market, flexibility and innovation continue to be key forces throughout the industry,” he said.
What is clear is that power and heat rejection are now central problems, no longer secondary to compute performance.
As Pulfer puts it, “Heat rejection is crucial to keeping the world’s digital foundations running smoothly, reliably and sustainably.”
By the end of the decade, the shape and scale of the rack itself may determine the future of digital infrastructure.



