- The Odinn Infinity Cube combines several Omnia supercomputers in a single glass case
- The memory capacity reaches 86 TB DDR5 ECC registered RAM
- NVMe storage in the cube is a whopping 27.5 PB
Odinn, a California-based startup, has introduced the Infinity Cube as an attempt to compress data center-class computers into a visually contained structure.
At CES 2026, the company unveiled the Odinn Omnia, a mobile AI supercomputer, though a single system of that scale would face clear throughput limitations, and that’s where the Cube comes in.
The Infinity Cube is a 14ft x 14ft AI cluster capable of housing multiple Omnia AI supercomputers in a glass enclosure.
Scaling AI with modular clustering
This device emphasizes extreme component density over incremental efficiency improvements.
According to Odinn, a fully customizable core specification allows the Cube to scale up to 56 AMD EPYC 9845 processors, totaling 8960 CPU cores.
Its GPU capacity extends to 224 Nvidia B200 GPUs (on the HGX B200 platform), paired with 43 TB of combined VRAM.
Memory tops out at 86 TB of DDR5 ECC registered RAM, while NVMe storage capacity reaches 27.5 PB.
These numbers imply significant internal connectivity and power distribution requirements, which the company has not disclosed publicly.
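The headline totals are consistent with the per-part figures, assuming the EPYC 9845's 160 cores and the B200's 192 GB of HBM3e per GPU (neither per-part figure is stated by Odinn; this is a back-of-the-envelope sketch, not a disclosed configuration):

```python
# Sanity check of the quoted cluster aggregates.
# Assumed per-part figures (not from the article):
#   AMD EPYC 9845  -> 160 cores per CPU
#   Nvidia B200    -> 192 GB HBM3e per GPU
CPU_COUNT = 56
CORES_PER_CPU = 160
GPU_COUNT = 224
VRAM_PER_GPU_GB = 192

total_cores = CPU_COUNT * CORES_PER_CPU              # matches the quoted 8960 cores
total_vram_tb = GPU_COUNT * VRAM_PER_GPU_GB / 1000   # ~43 TB, matching the quoted VRAM

print(total_cores)    # 8960
print(total_vram_tb)  # 43.008
```

The ~43 TB figure only works out if "224 units" counts individual B200 GPUs rather than 8-GPU HGX baseboards.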
The device uses liquid cooling, with each Omnia device managing its own thermal requirements without shared external infrastructure.
This design avoids reliance on raised floors or centralized cooling systems, at least in theory.
The Infinity Cube relies on a proprietary software layer called NeuroEdge to coordinate workloads across the cluster.
The software integrates with Nvidia’s AI software ecosystem and common frameworks and handles planning and deployment automatically.
This abstraction aims to reduce the need for manual tuning, although it also places operational dependencies on Odinn’s software maturity.
Institutions that already rely on cloud infrastructure for AI workloads may question whether on-premises orchestration simplifies management in real-world conditions.
The company states that Infinity Cube is suitable for organizations with strict privacy, security or latency requirements that discourage cloud dependency.
Placing infrastructure closer to workloads can reduce network latency, but it also shifts responsibility for uptime, maintenance and lifecycle management back to the owner.
The idea of presenting data center hardware in compact glass enclosures may appeal aesthetically.
However, the practical trade-offs between density, availability and resilience remain unresolved without evidence of real-world implementation.
TechRadar will extensively cover this year’s CES and will bring you all the big announcements as they happen. Head over to our CES 2026 news page for the latest stories and our hands-on verdicts on everything from wireless TVs and foldable screens to new phones, laptops, smart home gadgets and the latest in artificial intelligence. You can also ask us a question about the show in our CES 2026 live Q&A and we will do our best to answer it.