- ZAM stacks nine functional memory layers vertically inside each compact module
- Each of ZAM's eight DRAM layers reportedly holds exactly 1.125 GB of capacity
- Estimated ZAM bandwidth approaches the HBM4 performance range used by Nvidia
Computer memory architecture is set to undergo a significant structural transformation in the coming years.
A new design called Zero-Angle Memory (ZAM) stacks chips vertically instead of spreading them across a flat surface, a shift that could boost data transfer speeds while lowering power consumption.
Intel has emphasized this technology as a potential replacement for existing HBM memory.
Inside the nine-layer ZAM memory design
Technical diagrams from an upcoming VLSI conference have now revealed the internal details of this memory design.
Eight separate DRAM storage layers sit beneath a single control layer within each ZAM module built by the consortium.
This arrangement gives each module a total of nine functional layers stacked on top of each other vertically.
The images from the conference presentation show how each of the eight DRAM layers contains exactly 1.125 GB of storage capacity.
The basic math therefore gives about 9 GB of total memory per ZAM module before any overhead deductions.
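The capacity figure follows from simple arithmetic; a quick sketch using the reported layer count and per-layer capacity:

```python
# Reported ZAM stack figures (per the conference diagrams)
dram_layers = 8        # storage layers beneath the single control layer
gb_per_layer = 1.125   # reported DRAM capacity per layer, in GB

total_gb = dram_layers * gb_per_layer
print(total_gb)  # 9.0 GB per module, before overhead deductions
```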
Three through-silicon vias (TSVs) run through the entire vertical stack to electrically connect each layer from top to bottom.
Intel developed the fusion bonding method, which creates these TSV connections with extreme precision and reliability.
Each DRAM layer is separated from its neighbor by a silicon substrate that is only 3 microns thick.
These TSVs attach to either two or three metal rings on each layer for stable electrical flow.
Bandwidth estimates derived from previous claims now place ZAM close to HBM4 performance figures from Nvidia’s Vera Rubin platform.
ZAM targets bandwidth in the HBM4 class
A Japanese company called Saimemory Corporation is leading the commercialization effort for this Intel-backed technology.
Operating as a wholly owned subsidiary of SoftBank, Saimemory has yet to release official data rates for this new memory design.
Previous statements from the company suggested a two to three times faster speed compared to current HBM3 memory standards.
HBM3 currently delivers 819 GB/s of bandwidth per stack (6.4 Gbps per pin) in its default configuration – so tripling that baseline would give ZAM around 2.5 TB/s of total throughput for AI processors.
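The "two to three times HBM3" claim translates to a bandwidth range rather than a single figure; a minimal sketch of that arithmetic, assuming the 819 GB/s HBM3 baseline:

```python
# HBM3 baseline and Saimemory's claimed 2x-3x speedup for ZAM
hbm3_gb_s = 819  # GB/s per HBM3 stack (6.4 Gbps per pin)

zam_low = 2 * hbm3_gb_s   # 1638 GB/s, ~1.6 TB/s
zam_high = 3 * hbm3_gb_s  # 2457 GB/s, ~2.5 TB/s
print(zam_low, zam_high)
```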
Nvidia’s Vera Rubin AI platform reportedly relies on HBM4 for its highest available bandwidth configurations.
This performance parity puts Intel’s HBM-killer memory technology in direct competition with Nvidia’s preferred memory standard.
Currently, no working ZAM prototype has been shown to independent reviewers or third-party test labs anywhere in the world.
Fabrication of eight bonded layers without introducing defects remains an unproven and difficult industrial challenge for this consortium.
HBM4 already benefits from Nvidia’s established manufacturing roadmap and existing global supply chains across multiple suppliers.
A memory standard with superior technical specifications often fails without broad ecosystem adoption and industry support over time.
The VLSI conference presentation in June will determine whether Intel’s HBM-killer claims move beyond paper diagrams to physical reality.
Via HPC WIRE