- New partnership gives OpenAI access to hundreds of thousands of Nvidia GPUs on AWS
- AWS will cluster GB200 and GB300 GPUs for low-latency AI performance
- OpenAI can extend its computing usage further into 2027 under this agreement
The AI industry is scaling faster than almost any technology before it, and its demand for computing power is enormous.
To meet this demand, OpenAI and Amazon Web Services (AWS) have entered into a multi-year partnership that could reshape how AI tools are built and deployed.
The collaboration, valued at $38 billion, gives OpenAI access to AWS’s vast infrastructure to run and scale its most advanced artificial intelligence workloads.
Building a foundation for massive computing power
The deal gives OpenAI immediate access to AWS computing systems powered by Nvidia GPUs and Amazon EC2 UltraServers.
These systems are designed to deliver high performance and low latency for demanding AI operations, including ChatGPT model training and inference.
“Scaling frontier AI requires massive, reliable computing,” said OpenAI co-founder and CEO Sam Altman. “Our partnership with AWS strengthens the broad computing ecosystem that will power this next era and bring advanced artificial intelligence to everyone.”
AWS says the new architecture will cluster Nvidia's GB200 and GB300 GPUs in interconnected systems, letting workloads share data with low latency and run efficiently at scale.
The infrastructure is expected to be fully developed by the end of 2026, with room to expand further into 2027.
“As OpenAI continues to push the boundaries of what’s possible, AWS’ best-in-class infrastructure will serve as the backbone of its AI ambitions,” said Matt Garman, CEO of AWS. “The breadth and immediate availability of optimized computing demonstrates why AWS is uniquely positioned to support OpenAI’s massive AI workloads.”
Already known for the scalability of its cloud hosting and web hosting services, AWS is expected to play a central role in the partnership's success.
The data centers handling OpenAI workloads will use tightly coupled clusters capable of managing hundreds of thousands of processing units.
Everyday users may soon notice faster, more responsive AI tools powered by stronger infrastructure behind ChatGPT and similar services.
Developers and enterprises could gain simpler and more direct access to OpenAI’s models through AWS, making it easier to integrate AI into apps and data systems.
However, the possibility of expanding to tens of millions of CPUs raises questions of technical feasibility alongside logistical ones of cost, sustainability and long-term efficiency.
Scaling up computing resources this quickly also means rising energy consumption and higher running costs for such large systems.
Concentrating AI development under large cloud providers may also raise concerns about dependency, control and reduced competition.
OpenAI and AWS have worked together for some time. Earlier this year, OpenAI made its open-weight foundation models available through Amazon Bedrock, allowing AWS users to integrate them into their existing systems.
The availability of these models on a major cloud hosting platform meant that more developers could experiment with generative AI tools for data analysis, coding and automation.
Companies such as Peloton, Thomson Reuters and Verana Health are already using OpenAI models in the AWS environment to improve their business processes.
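For developers, that integration looks much like any other Bedrock model call. The Python sketch below shows roughly how an AWS user might invoke one of OpenAI's open-weight models through Bedrock's Converse API with boto3; the model ID and region are assumptions here, so check the Bedrock model catalog for the identifiers actually available in your account.

```python
import boto3

# Bedrock runtime client; the chosen region must have the model enabled.
client = boto3.client("bedrock-runtime", region_name="us-west-2")

response = client.converse(
    modelId="openai.gpt-oss-120b-1:0",  # placeholder ID -- verify in the Bedrock catalog
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize this quarter's support tickets in one paragraph."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

# Converse returns the assistant reply as a list of content blocks.
for block in response["output"]["message"]["content"]:
    if "text" in block:
        print(block["text"])
```

Because the Converse API normalizes request and response shapes across providers, swapping in a different modelId is usually the only change needed to try another model.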