- Arm-based Neoverse CPUs will be able to communicate directly with Nvidia GPUs via NVLink Fusion
- NVLink Fusion bypasses PCIe bottlenecks in AI-focused server deployments
- Hyperscalers such as Microsoft, Amazon and Google can pair custom Arm CPUs with Nvidia GPUs
Nvidia has announced that Arm-based Neoverse CPUs will now be able to integrate with its NVLink Fusion technology.
This integration allows Arm licensees to design processors capable of direct communication with Nvidia GPUs.
Previously, NVLink connections were largely limited to Nvidia’s own CPUs or to servers using Intel and AMD processors. With this change, hyperscalers such as Microsoft, Amazon and Google can pair custom Arm CPUs with Nvidia GPUs in their workstations and AI servers.
Extending NVLink beyond proprietary CPUs
The development also enables Arm-based chips to move data more efficiently compared to standard PCIe connections.
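For readers curious what that difference looks like in practice, interconnect throughput between a host CPU and a GPU can be estimated with a simple timed copy. The sketch below is a generic illustration using standard CUDA runtime calls, not code from Nvidia or Arm; the buffer size and iteration count are arbitrary assumptions.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal host-to-device bandwidth check (illustrative only).
// Buffer size and iteration count are arbitrary; real NVLink vs PCIe
// results depend on the platform, topology and driver configuration.
int main() {
    const size_t bytes = 256ull << 20;   // 256 MiB transfer buffer
    const int iters = 20;

    void *host_buf = nullptr, *dev_buf = nullptr;
    cudaMallocHost(&host_buf, bytes);    // pinned host memory
    cudaMalloc(&dev_buf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i) {
        cudaMemcpy(dev_buf, host_buf, bytes, cudaMemcpyHostToDevice);
    }
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double gib = (double)bytes * iters / (1ull << 30);
    printf("Host-to-device: %.1f GiB/s\n", gib / (ms / 1000.0));

    cudaFree(dev_buf);
    cudaFreeHost(host_buf);
    return 0;
}
```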
Arm confirmed that its Neoverse designs will include a protocol that allows seamless data transfer with Nvidia GPUs.
Arm licensees can build CPU SoCs that connect natively to Nvidia accelerators by directly integrating NVLink IP.
Customers using these CPUs will be able to deploy systems where multiple GPUs are paired with a single CPU for AI workloads.
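As a rough sketch of how such a topology appears to software, the snippet below enumerates the GPUs visible to a single host and queries pairwise peer-to-peer access. It uses generic CUDA runtime calls and is not specific to NVLink Fusion or any particular Arm platform.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Enumerate GPUs attached to one host and check pairwise
// peer-to-peer access (illustrative; not NVLink Fusion-specific).
int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("GPUs visible to this CPU: %d\n", count);

    for (int a = 0; a < count; ++a) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, a);
        printf("GPU %d: %s\n", a, prop.name);

        for (int b = 0; b < count; ++b) {
            if (a == b) continue;
            int can_access = 0;
            cudaDeviceCanAccessPeer(&can_access, a, b);
            printf("  peer access %d -> %d: %s\n",
                   a, b, can_access ? "yes" : "no");
        }
    }
    return 0;
}
```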
The announcement was made at the Supercomputing ’25 conference and reflects collaboration between the CPU and GPU developers.
Nvidia’s Grace Blackwell platform currently pairs multiple GPUs with an Arm-based CPU, while other server configurations rely on Intel or AMD CPUs.
Microsoft, Amazon and Google are deploying Arm-based CPUs to gain more control over their infrastructure and reduce operating costs.
Arm does not manufacture CPUs itself; it licenses its instruction set architecture and sells ready-made core designs that shorten the development of Arm-based processors.
NVLink Fusion support in Arm chips enables these processors to work with Nvidia GPUs without requiring Nvidia CPUs.
The ecosystem also affects sovereign AI projects, where governments or cloud providers might want Arm CPUs for control plane tasks.
NVLink allows these systems to use Nvidia GPUs while maintaining custom CPU configurations.
SoftBank, which previously held shares in Nvidia, is backing OpenAI’s Stargate project, which plans to use both Arm and Nvidia chips.
NVLink Fusion integration therefore provides opportunities to pair Arm CPUs with market-leading GPU accelerators in multiple environments.
From a technical perspective, NVLink expansion increases the number of CPUs that can be used in Nvidia-centric AI systems.
It also allows future Arm-based designs to compete directly with Nvidia’s Grace and Vera processors as well as Intel Xeon CPUs in configurations where GPUs are the main compute units.
The development may reduce the appeal of alternative interconnects or competing AI accelerators, but chip development cycles may affect the timing of adoption.