- Arm enters silicon manufacturing with a CPU designed for large AI workloads
- New AGI CPU doubles rack performance compared to traditional x86 systems
- Meta and OpenAI use Arm chip for next-generation infrastructure
Arm has extended its compute platform to production silicon for the first time with the introduction of what it calls “the next evolution of the Arm compute platform,” the AGI CPU.
The company says the CPU is designed specifically for AI data centers and supports agentic AI workloads, which involve continuously running agents capable of reasoning, planning and acting.
The processor has up to 136 Neoverse V3 cores per CPU, with 6 GB/s memory bandwidth per core and sub-100ns latency, allowing higher workload density and improved system efficiency.
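As a back-of-the-envelope check (not a figure from the announcement), the quoted per-core bandwidth and core count imply the aggregate memory bandwidth per socket:

```python
# Illustrative arithmetic from the figures quoted above; the actual
# socket-level bandwidth may differ from this simple multiplication.
cores_per_cpu = 136          # up to 136 Neoverse V3 cores per CPU
bandwidth_per_core_gbps = 6  # 6 GB/s memory bandwidth per core

aggregate_gbps = cores_per_cpu * bandwidth_per_core_gbps
print(f"Implied aggregate bandwidth per socket: {aggregate_gbps} GB/s")  # 816 GB/s
```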
Performance and capacity
The Arm AGI CPU promises deterministic performance under sustained load with a 300-watt TDP and one dedicated core per program thread.
The processor supports air-cooled 1U server chassis with up to 8,160 cores per rack, and liquid-cooled installations that reach 45,000 cores per rack.
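The rack densities quoted above also imply rough socket counts per rack. The arithmetic below is illustrative only; actual chassis and rack configurations may differ:

```python
# Rough socket counts implied by the quoted rack densities
# (illustrative; vendors may ship different configurations).
cores_per_cpu = 136
air_cooled_cores_per_rack = 8_160
liquid_cooled_cores_per_rack = 45_000

air_sockets = air_cooled_cores_per_rack // cores_per_cpu        # 60 sockets
liquid_sockets = liquid_cooled_cores_per_rack / cores_per_cpu   # ~331 sockets
print(f"Air-cooled: {air_sockets} sockets/rack")
print(f"Liquid-cooled: ~{round(liquid_sockets)} sockets/rack")
```

Note that the air-cooled figure divides evenly (60 sockets), while the liquid-cooled figure does not, suggesting a different per-chassis layout.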
Compared with x86 CPUs, Arm says the AGI CPU delivers more than twice the performance per rack, supporting larger AI workloads while remaining energy efficient.
These features aim to improve compute density, accelerator utilization, and overall infrastructure efficiency.
Meta serves as the lead partner and co-developer of the Arm AGI CPU and integrates it with its Meta Training and Inference Accelerator (MTIA) to optimize data center performance.
Early commercial adoption also includes OpenAI, Cerebras, Cloudflare, Positron, Rebellions, SAP and SK Telecom.
Arm is partnering with OEMs and ODMs such as Lenovo, Supermicro, Quanta Computer and ASRock Rack to provide early systems, with wider availability expected in the second half of 2026.
More than 50 industry leaders across the hyperscale, cloud, semiconductor, memory, networking, software and system design sectors are supporting the CPU’s rollout.
“Over the past decade, we’ve worked closely with Arm to build Graviton here at AWS, and it’s been a remarkable success—the majority of AWS compute capacity added to our fleet by 2025 was powered by Graviton,” said James Hamilton, SVP and Distinguished Engineer, Amazon.
“This collaboration has been great for both companies, and Graviton continues to deliver better value for money to our customers.”
Industry partners also pointed to the broader infrastructure implications of the new CPU.
“The new Arm AGI CPU will further unlock the Arm ecosystem for a wide range of customers and create new opportunities for everyone…” said Charlie Kawwas, president of Semiconductor Solutions Group, Broadcom Inc.
“As Broadcom builds the world’s most capable XPU and networking solutions for hyperscalers… our partnership with Arm has enabled us to move with unmatched intent and speed.”
The Arm AGI CPU is intended to serve as the foundation for agentic AI workloads, enabling organizations to deploy AI tools at scale while maintaining high efficiency.
The processor supports large-scale deployment of AI applications, including accelerator management, control plane processing, and cloud or enterprise-based API and task hosting.
That said, the Arm AGI CPU’s success will depend on data center adoption, integration with existing accelerators and memory, and proven performance gains over alternatives.