- Meta explores new hardware paths as cloud vendors scramble to secure capacity
- Google positions its TPUs as a credible option for large deployments
- Data center operators face rising component costs across multiple hardware categories
Meta is reported to be in advanced discussions to secure large amounts of Google’s custom AI hardware for future development work.
The negotiations reportedly cover renting Google Cloud Tensor Processing Units (TPUs) during 2026, with a transition to direct purchases in 2027.
This is a shift for both companies, as Google has historically limited its TPUs to internal workloads, while Meta has relied on a broad mix of CPUs and GPUs sourced from multiple vendors.
Meta is also exploring other hardware options, including RISC-V-based processors from Rivos, suggesting a wider effort to diversify its computing base.
The prospect of a multi-billion-dollar deal moved markets immediately: Alphabet's valuation soared to nearly $4 trillion, while Meta's stock also rose following the reports.
Nvidia’s stock fell several percentage points as investors speculated on the long-term impact of major cloud providers shifting their spending to alternative architectures.
Estimates from Google Cloud executives suggest a successful deal could allow Google to capture a meaningful share of Nvidia's data center revenue, which exceeded $50 billion in a single quarter this year.
The scale of demand for AI tools has created intense competition for supply, raising questions about how new hardware partnerships could affect sector stability.
Even if the deal goes ahead as planned, it will enter a market that remains constrained by limited manufacturing capacity and aggressive implementation timelines.
Data center operators continue to report shortages of GPUs and memory modules, and prices are expected to rise through next year.
The rapid expansion of AI infrastructure has strained logistics chains for all major components, and current trends suggest procurement pressures may intensify as companies race to secure long-term hardware commitments.
These factors create uncertainty about the actual impact of the deal, as the broader supply environment may limit production volume regardless of financial investment.
Analysts caution that the future performance of any of these architectures remains unclear.
Google maintains an annual release schedule for its TPUs, while Nvidia continues to iterate its own designs at the same rate.
The competitive landscape may change again before Meta receives its first big shipment of hardware.
There's also the question of whether alternative designs can deliver better long-term value than existing GPUs.
The rapid evolution of AI workloads means that device relevance can change dramatically, and this dynamic shows why companies continue to diversify their computing strategies and explore multiple architectures.
Via Tom’s Hardware