- A new kernel feature could reduce data center energy consumption by up to 30%
- Suspending interrupt requests (IRQs) cuts unnecessary CPU overhead and is now part of the mainline Linux kernel
- Hyperscalers are likely the big winners, and it will be interesting to see how it affects AI
Data centers reportedly account for between 2% and 4% of total electricity consumption worldwide, something hyperscalers are understandably looking to reduce where possible.
Potential solutions include implementing next-generation architectures such as Hyperconverged Infrastructure (HCI) and using advanced cooling techniques.
Professor Martin Karsten at the Cheriton School of Computer Science, University of Waterloo in Ontario, Canada, has a cheaper, easier solution. He claims that data center energy consumption could be cut by up to 30% just by changing a few lines of Linux code.
Small change, big impact
In collaboration with Joe Damato at Fastly, Professor Karsten has developed a small, non-intrusive kernel change of just 30 lines of code that uses IRQ (interrupt request) suspension to reduce unnecessary CPU interruptions and improve how Linux processes network traffic. This fine-tuning has now been published as part of the latest Linux kernel release, version 6.13.
The code change, which reportedly improves Linux network efficiency and increases throughput by up to 45% without increasing latency, is based on a research paper called “Kernel vs. User-Level Networking: Don’t Throw Out the Stack with the Interrupts,” which Professor Karsten wrote with former master’s student Peter Cai in 2023.
“We didn’t add anything,” Professor Karsten said of the code change. “We just rearranged what is done when, leading to a much better use of the data center’s CPU caches. It’s a bit like rearranging the pipeline at a manufacturing plant so you don’t have people running around all the time.”
Professor Karsten believes this small adaptation could have a huge impact. “All of these big companies – Amazon, Google, Meta – use Linux in some capacity, but they are very picky about how they decide to use it. If they choose to ‘turn on’ our method in their data centers, it could save gigawatt-hours of energy worldwide. Almost every single service request happening on the Internet could be affected by this.”
Aoife Foley, IEEE Senior Member and professor in the School of Mechanical and Aerospace Engineering at Queen’s University Belfast, welcomes the potential savings, but notes that it will take much more than changing a few lines of code to tackle the wider energy challenges.
“There’s a long way to go yet,” she says. “These facilities represent huge electricity requirements, which adds pressure to the electricity grid and increases the challenge of the energy transition, especially in smaller countries. Although it is impossible to calculate exactly, the entire ICT sector is estimated to account for approximately 1.4 percent of CO₂ emissions globally. Infrastructure and operational managers have a responsibility here: they have to consider the unnecessary waste associated with data storage and commit to generating power from more sustainable sources.”
Yandex recently released an open source tool called Perforator that takes a similar approach to Professor Karsten’s research, helping companies optimize their code, reduce server load, and ultimately lower energy and equipment costs.
Sergey Skvortsov, who leads the team behind Perforator, told us: “This latest research confirms what we have long thought: optimizing code is one of the most effective ways to reduce data center energy consumption. Perforator helps companies identify and fix inefficient code, cutting CPU usage by up to 20% and reducing infrastructure costs – without sacrificing performance. With data centers consuming up to 4% of global electricity, tools such as Perforator can play a crucial role in making technical infrastructure more sustainable.”