- OpenAI adds Google TPUs to reduce its reliance on Nvidia GPUs
- TPU adoption highlights OpenAI's push to diversify its compute options
- Google Cloud wins OpenAI as a customer despite competitive dynamics
OpenAI has reportedly started using Google's Tensor Processing Units (TPUs) to run ChatGPT and other products.
A report from Pakinomist, citing a source familiar with the move, notes that this is OpenAI's first major shift away from Nvidia hardware, which has so far formed the backbone of OpenAI's compute stack.
Google leases TPUs through its cloud platform, and OpenAI joins a growing list of external customers that includes Apple, Anthropic, and Safe Superintelligence.
Not giving up Nvidia
While the rented chips are not Google's most advanced TPU models, the agreement reflects OpenAI's efforts to lower inference costs and diversify beyond both Nvidia and Microsoft Azure.
The decision comes as inference workloads grow alongside ChatGPT usage, which now serves more than 100 million daily active users.
This demand accounts for a significant share of OpenAI's estimated $40 billion annual compute budget.
Google's v6e "Trillium" TPUs are built for steady-state inference and offer high throughput at lower operating costs than top-end GPUs.
Though Google declined to comment and OpenAI did not immediately respond to Pakinomist, the move suggests a broadening of infrastructure options.
OpenAI continues to rely on Microsoft-backed Azure for most of its deployment (Microsoft is the company's largest investor), but GPU supply constraints and pricing pressure have exposed the risk of depending on a single supplier.
Bringing Google into the mix not only improves OpenAI's ability to scale compute, it also aligns with a wider industry trend of mixing hardware sources for flexibility and pricing leverage.
There is nothing to suggest OpenAI is considering abandoning Nvidia entirely, but incorporating Google's TPUs adds more control over cost and availability.
How deeply OpenAI can integrate this hardware into its stack remains to be seen, especially given the software ecosystem's long-standing dependence on CUDA and Nvidia tooling.