- AMD aims for MI500 launch in 2027 as Nvidia prepares Vera-Rubin a year earlier
- CES 2026 shows growing gap between AMD and Nvidia AI accelerator timelines
- AMD is expanding its AI portfolio while next-gen hardware is still a year away
At CES 2026, AMD discussed its near- and long-term AI hardware plans, including a preview of the Instinct MI500 series accelerators expected to arrive in 2027.
The company used the show to present an early look at Helios, a rack-scale platform built around Instinct MI455X GPUs and EPYC Venice CPUs. Helios is positioned as a blueprint for very large-scale AI infrastructure rather than a shipping product.
AMD also introduced the Instinct MI440X, a new accelerator aimed at on-prem enterprise deployments, designed to fit into existing eight-GPU systems for training, fine-tuning and inference workloads.
Nvidia Vera-Rubin also on the way
More interesting to many industry observers, however, is what comes next. AMD said the Instinct MI500 series is slated for launch in 2027 and will deliver a significant leap in AI performance compared to the MI300X generation.
The MI500 is expected to use AMD’s CDNA 6 architecture, a 2nm process and HBM4E memory.
AMD claims the design is on track to deliver up to a 1,000x increase in AI performance over the MI300X, though no detailed benchmarks were shared, as the product is still well over a year from launch.
As exciting as this may be, the timing is awkward for AMD because Nvidia is preparing to introduce its Vera-Rubin platform this year.
At CES 2026, Nvidia also detailed its replacement for the Grace-Blackwell rack-scale design. The Vera-Rubin platform is built from six new chips designed to work together as a single rack-scale system.
These include the Vera CPU, Rubin GPU, NVLink 6 switch, ConnectX-9 SuperNIC, BlueField-4 DPU and Spectrum-6 Ethernet switch.
In its NVL72 configuration, the system combines 72 Rubin GPUs and 36 Vera CPUs connected via NVSwitch and NVLink fabric to act as a shared memory system.
Nvidia says Vera-Rubin NVL72 systems cut end-to-end cost per token for mixture-of-experts models by 10x and reduce the number of GPUs required for training by a factor of four.
Rubin GPUs use eight stacks of HBM4 memory and include a new Transformer Engine with hardware-assisted adaptive compression, which is intended to improve efficiency during inference and training without affecting model accuracy.
Rubin-based systems will be available from partners in the second half of 2026, including NVL72 rack-scale systems and smaller HGX NVL8 configurations, with rollouts planned across cloud providers, AI infrastructure operators and system vendors.
By the time AMD’s Instinct MI500 series arrives in 2027, Nvidia’s Vera-Rubin platform is expected to be available from partners and in use at scale.
TechRadar will extensively cover this year’s CES and will bring you all the big announcements as they happen. Head over to our CES 2026 news page for the latest stories and our hands-on verdicts on everything from wireless TVs and foldable screens to new phones, laptops, smart home gadgets and the latest in artificial intelligence. You can also ask us a question about the show in our CES 2026 live Q&A and we will do our best to answer it.
And don’t forget to follow us on TikTok and WhatsApp for the latest from the CES show floor!