- NVIDIA pushes SSD manufacturers toward a 100 million IOPS storage target
- But Silicon Motion CEO says the industry lacks the memory technology to meet AI requirements
- New memory may be needed to unlock ultra-fast AI storage
As GPUs grow faster and memory bandwidth scales into terabytes per second, storage has become the next major bottleneck in AI computing.
Nvidia is pushing storage to keep pace with the requirements of AI models by setting an ambitious target for small-block random reads.
“Right now they are aiming at 100 million IOPS – which is huge,” Wallace C. Kuo, CEO of Silicon Motion, told Tom’s Hardware.
Today’s fastest PCIe 5.0 SSDs top out at roughly 14.5 GB/s and 2 to 3 million IOPS in workloads involving 4K and 512-byte random reads.
While larger blocks favor bandwidth, AI inference typically pulls small, scattered bits of data, which makes 512B random reads more relevant and much harder to accelerate.
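To see why the block size matters, a quick back-of-the-envelope conversion between IOPS and bandwidth is useful. The sketch below (illustrative arithmetic only, using the figures quoted in the article) shows that 100 million 512-byte reads per second is only about 51 GB/s of raw bandwidth; the difficulty is not moving the bytes but sustaining that many individual small random-read commands per second.

```python
def iops_to_bandwidth_gbs(iops: int, block_bytes: int) -> float:
    """Bandwidth (GB/s, decimal) implied by a given random-read rate."""
    return iops * block_bytes / 1e9

# Roughly today's fastest PCIe 5.0 drives: ~3 million IOPS at 4K blocks
print(iops_to_bandwidth_gbs(3_000_000, 4096))   # ≈ 12.3 GB/s

# Nvidia's reported target: 100 million IOPS at 512-byte blocks
print(iops_to_bandwidth_gbs(100_000_000, 512))  # ≈ 51.2 GB/s
```

In other words, the target is only about 3-4x today's sequential bandwidth, but more than 30x today's small-block command rate, which is why controller and media latency, not the PCIe link, are the bottleneck.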
Kioxia is already preparing an “AI SSD” based on XL-FLASH that is expected to exceed 10 million IOPS. It could launch alongside Nvidia’s upcoming Vera Rubin platform next year. But scaling beyond that could require more than just faster controllers or NAND tweaks.
“I think they’re looking for a media change,” Kuo said. “Optane would have been the ideal solution, but it’s gone now. Kioxia is trying to bring XL-NAND and improve its performance. SanDisk is trying to introduce high-bandwidth flash, but frankly I don’t really believe in it.”
Power, cost, and latency are all challenges. “The industry really needs something fundamentally new,” Kuo added. “Otherwise, it will be very difficult to achieve 100 million IOPS and still be cost-effective.”
Micron, SanDisk, and others are racing to develop new forms of non-volatile memory.
Whether any of them arrive in time for Nvidia’s next wave of hardware is the big unknown.
