- Sam Altman says OpenAI will soon pass 1 million GPUs and is aiming for 100 million
- Running 100 million GPUs could cost around $3 trillion and push past global power infrastructure limits
- OpenAI’s expansion to Oracle and Google’s TPUs signals growing impatience with its current suppliers
OpenAI says it is on track to bring more than a million GPUs online by the end of 2025, a figure that already places it far ahead of rivals in terms of compute resources.
Still, for CEO Sam Altman the milestone is only a beginning. “We will cross well over 1 million GPUs brought online by the end of this year,” he said.
The comment, delivered with apparent levity, has nevertheless sparked serious discussion about the feasibility of deploying 100 million GPUs in the foreseeable future.
A vision far beyond the current scale
To put this figure in perspective, Elon Musk’s xAI runs Grok 4 on about 200,000 GPUs, meaning OpenAI’s planned one-million-GPU scale is already five times that number.
Scaling this to 100 million GPUs, however, would involve astronomical costs, estimated at about $3 trillion, and pose major challenges in manufacturing, power consumption, and physical deployment.
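For a rough sense of the numbers, the back-of-envelope sketch below works through the figures cited in this article (roughly 200,000 GPUs for xAI, a 1 million GPU target for 2025, and a ~$3 trillion estimate for 100 million GPUs); the per-GPU cost it prints is simply the implied quotient, not a figure quoted by any company.

```python
# Back-of-envelope sketch of the scale comparison discussed in the article.
# Assumptions: the GPU counts and the ~$3 trillion total are taken from the
# article; the implied per-GPU cost is derived here, not a quoted figure.

openai_2025_gpus = 1_000_000        # OpenAI's stated target by end of 2025
xai_gpus = 200_000                  # xAI's reported fleet (Grok 4)
openai_longterm_gpus = 100_000_000  # Altman's "100x" aspiration
total_cost_usd = 3e12               # ~$3 trillion estimate for 100M GPUs

print(f"OpenAI 2025 vs xAI: {openai_2025_gpus / xai_gpus:.0f}x")               # -> 5x
print(f"100M target vs 2025: {openai_longterm_gpus / openai_2025_gpus:.0f}x")  # -> 100x
print(f"Implied cost per GPU: ${total_cost_usd / openai_longterm_gpus:,.0f}")  # -> $30,000
```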
“Very proud of the team but now they better get to work figuring out how to 100x that lol,” Altman wrote.
While Microsoft’s Azure remains OpenAI’s primary cloud platform, the company has also partnered with Oracle and is reportedly exploring Google’s TPU accelerators.
This diversification reflects a broader industry trend in which Meta, Amazon, and Google are also moving toward in-house chips and greater reliance on high-bandwidth memory (HBM).
SK Hynix is one of the companies likely to benefit from this expansion: as GPU demand increases, so does demand for HBM, a key component in AI training hardware.
According to an official in the data center industry, “In some cases, the specifications of GPUs and HBMs are determined … by customers (like OpenAI) … configured by customer requests.”
SK Hynix’s performance has already shown strong growth, with forecasts suggesting a record-breaking operating profit in the second quarter of 2025.
OpenAI’s collaboration with SK Group also appears to be deepening. Chairman Chey Tae-won and CEO Kwak No-jung recently met with Altman, reportedly to strengthen their position in the AI infrastructure supply chain.
The relationship builds on earlier engagements such as SK Telecom’s AI collaboration involving ChatGPT and participation in the GenAI Impact Consortium.
That said, OpenAI’s rapid expansion has raised concerns about financial sustainability, with reports that SoftBank may reconsider its investments.
If OpenAI’s 100-million-GPU target is to materialize, it will require not only capital but major breakthroughs in compute efficiency, manufacturing capacity, and global energy infrastructure.
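To illustrate why power alone is a binding constraint, here is a rough estimate of the continuous electrical draw of a hypothetical 100-million-GPU fleet. The per-GPU wattage and overhead factor are assumptions on my part (roughly in line with current flagship data-center accelerators and typical facility overhead), not figures from the article.

```python
# Rough power estimate for a hypothetical 100-million-GPU fleet.
# Assumptions (not from the article): ~700 W per accelerator, PUE ~1.3.

gpus = 100_000_000
watts_per_gpu = 700   # roughly a current flagship data-center GPU
pue = 1.3             # facility overhead: cooling, networking, power conversion

total_gw = gpus * watts_per_gpu * pue / 1e9
print(f"Estimated continuous draw: ~{total_gw:.0f} GW")  # on the order of 90 GW
```

Under these assumptions the fleet would draw on the order of 90 GW continuously, roughly the average electrical load of a large European country, which is why the 100-million figure is widely read as aspirational.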
For now, the goal reads as a bold statement of intent rather than a practical timetable.
Via Tom's Hardware