- Populating a single one-gigawatt AI facility costs nearly $80 billion
- Planned AI capacity across the industry could total 100 GW
- High-end GPU hardware is replaced outright roughly every five years, with no life extension
IBM CEO Arvind Krishna questions whether the current pace and scale of AI data center expansion can remain financially sustainable under existing assumptions.
He estimates that populating a single 1GW site with computer hardware now costs close to $80 billion.
With public and private plans indicating close to 100 GW of future capacity targeting advanced model training, the implied financial exposure rises toward $8 trillion.
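As a rough sanity check on that figure, here is a minimal back-of-the-envelope sketch; the per-gigawatt cost and planned capacity are the article's numbers, and the script itself is purely illustrative:

```python
# Back-of-the-envelope check on the implied exposure figure.
COST_PER_GW_USD = 80e9       # ~$80B to populate a single 1 GW site (Krishna's estimate)
PLANNED_CAPACITY_GW = 100    # ~100 GW of planned industry-wide capacity

implied_exposure = COST_PER_GW_USD * PLANNED_CAPACITY_GW
print(f"Implied hardware exposure: ${implied_exposure / 1e12:.0f} trillion")
# -> Implied hardware exposure: $8 trillion
```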
Financial burden of next-generation AI facilities
Krishna connects this trajectory directly to the update cycle that governs today’s accelerator fleets.
Most of the advanced GPU hardware installed in these centers is depreciated over approximately five years.
At the end of that window, operators do not extend the equipment's life; they replace it in full. The result is not a one-time capital hit, but a recurring liability that grows over time.
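A minimal sketch of what that cycle implies per site, assuming straight-line depreciation and full replacement at an unchanged hardware price (both simplifying assumptions, not figures given by Krishna):

```python
# Recurring-liability sketch: straight-line depreciation over five years,
# with the whole fleet replaced at the end of each window.
SITE_COST_USD = 80e9        # hardware cost of one 1 GW site
DEPRECIATION_YEARS = 5      # typical accelerator depreciation window

annual_writedown = SITE_COST_USD / DEPRECIATION_YEARS
print(f"Annual depreciation per 1 GW site: ${annual_writedown / 1e9:.0f}B")
# -> $16B written down each year, per site, with a fresh ~$80B purchase
#    due when the five-year window closes.
```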
CPU resources still play a part in these deployments, but they are no longer at the center of spending decisions.
The balance has shifted toward specialized accelerators that execute massively parallel workloads at a pace general-purpose processors cannot match.
This shift has significantly changed the definition of scale for modern AI facilities and pushed the capital requirements beyond what traditional enterprise data centers once required.
Krishna argues that depreciation is the factor most often misunderstood by market participants.
The pace of architectural change means performance leaps are coming faster than financial write-downs can easily absorb.
Hardware that is still functional becomes economically obsolete long before its physical life ends.
Investors like Michael Burry raise similar doubts about whether cloud giants can keep extending asset lives as model sizes and training requirements grow.
From a financial perspective, the burden no longer lies with energy consumption or land acquisition, but with the forced retirement of increasingly expensive hardware stacks.
Similar refresh dynamics already exist in workstation-class environments, but the scale at hyperscale sites is fundamentally different.
Krishna calculates that servicing the capital costs of these multi-gigawatt campuses would require hundreds of billions of dollars in annual profits just to break even.
This requirement rests on current hardware economics rather than speculative long-term efficiency gains.
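To see roughly how that break-even figure falls out, here is a hedged illustration under the same straight-line, five-year assumption; the built-out capacity scenarios are hypothetical, not Krishna's:

```python
# Annual carrying cost at fleet scale, under straight-line five-year
# depreciation. The built-out capacities are illustrative scenarios.
COST_PER_GW_USD = 80e9
DEPRECIATION_YEARS = 5

for built_gw in (25, 50, 100):
    annual_charge = built_gw * COST_PER_GW_USD / DEPRECIATION_YEARS
    print(f"{built_gw:>3} GW built -> ${annual_charge / 1e9:,.0f}B/year in depreciation alone")
# ->  25 GW built -> $400B/year in depreciation alone
# ->  50 GW built -> $800B/year in depreciation alone
# -> 100 GW built -> $1,600B/year in depreciation alone
```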
These projections come as leading tech companies announce ever-larger AI campuses measured not in megawatts, but in tens of gigawatts.
Some of these proposals already compete with the electricity demand of entire nations, raising parallel concerns about grid capacity and long-term energy prices.
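For scale, a quick conversion of campus power ratings into annual energy, assuming (illustratively) continuous operation at full load:

```python
# What "tens of gigawatts" means in annual energy terms,
# assuming round-the-clock operation at the rated load.
HOURS_PER_YEAR = 8760

for campus_gw in (1, 10, 30):
    twh_per_year = campus_gw * HOURS_PER_YEAR / 1000  # GW * h -> GWh -> TWh
    print(f"{campus_gw:>2} GW running continuously ≈ {twh_per_year:,.0f} TWh/year")
# ->  1 GW ≈ 9 TWh/year
# -> 10 GW ≈ 88 TWh/year (roughly the annual electricity use of a mid-sized European country)
# -> 30 GW ≈ 263 TWh/year
```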
Krishna puts the odds at almost zero that today's LLMs reach general intelligence on the next generation of hardware without a fundamental change in how they integrate knowledge.
This assessment frames the investment wave as driven more by competitive pressure than by validated technological inevitability.
The interpretation is difficult to avoid: the build-out assumes that future revenues will scale to match unprecedented expenses, even as depreciation cycles shorten and power limits tighten across multiple regions.
The risk is that the financial expectations may run ahead of the economic mechanisms needed to sustain them over the entire life cycle of these assets.
Via Tom's Hardware