- Nvidia commits capital and hardware to accelerate CoreWeave’s AI factory expansion
- CoreWeave gets early access to Vera Rubin platforms across multiple data centers
- Financial backing ties Nvidia’s balance sheet directly to growth in AI infrastructure
Nvidia and CoreWeave have expanded their long-standing relationship with an agreement that ties infrastructure deployment, capital investment and early access to future computing platforms.
The deal places CoreWeave among the first cloud providers expected to deploy Nvidia’s Vera Rubin generation, reinforcing its role as a preferred partner for large-scale AI infrastructure.
Nvidia has also committed $2 billion to CoreWeave through an outright stock purchase, underscoring the financial depth of the collaboration.
Scaling AI factories through custom infrastructure
The agreement focuses on accelerating the construction of AI factories, with CoreWeave planning more than five gigawatts of capacity by 2030.
Nvidia’s involvement extends beyond providing accelerators to supporting the procurement of land, power and physical infrastructure.
This approach links capital availability directly to hardware deployment timelines, reflecting how AI expansion increasingly depends on coordinating funding with the supply of computing hardware.
“AI is entering its next frontier, driving the largest infrastructure expansion in human history,” said Jensen Huang, founder and CEO of Nvidia.
“CoreWeave’s deep AI factory expertise, platform software and unmatched execution speed are recognized across the industry. Together, we are driving to meet extraordinary demand for Nvidia AI factories – the foundation of the AI industrial revolution.”
Nvidia and CoreWeave are also deepening customization across infrastructure and software layers.
CoreWeave’s cloud stack and operational tools will be tested and validated alongside Nvidia reference architectures.
“From the beginning, our collaboration has been guided by a simple belief: AI succeeds when software, infrastructure and operations are designed together,” said Michael Intrator, co-founder, chairman and CEO of CoreWeave.
CoreWeave is expected to deploy multiple generations of the Nvidia platform across its data centers, including early adoption of the Rubin platform, Vera CPUs and BlueField storage systems.
This multi-generational strategy suggests that Nvidia is using CoreWeave as a proof-of-concept for full-stack implementations rather than isolated components.
Vera CPUs are expected to be offered as a standalone option, signaling Nvidia’s intent to address CPU limitations that are becoming more apparent as agentic AI workloads grow.
These CPUs use a custom Arm architecture with high core counts, large contiguous memory capacity, and high-bandwidth connections.
“For the very first time, we’re going to offer Vera CPUs. Vera is such an incredible CPU. We’re going to offer Vera CPUs as a standalone part of the infrastructure. And so not only can you run your compute stack on Nvidia GPUs, you can now also run your compute stack wherever there is a CPU workload,” Jensen Huang said in a post on X.
In practical terms, the collaboration reflects two narratives that shape the current AI market.
Server CPUs are emerging as another pressure point in the supply chain, especially for agent-driven applications.
At the same time, offering advanced CPUs separately gives customers an alternative to full rack-scale systems, which can lower the cost of entry for certain deployments.