- SK Hynix completes HBM4 development and prepares the technology for mass production
- Nvidia’s Rubin AI processors are expected to use SK Hynix’s new HBM4 chips
- HBM4 doubles I/O connections and improves power efficiency by 40 percent
South Korean memory giant SK Hynix says it has completed development work on HBM4 and is now preparing the technology for mass production.
That puts it ahead of rivals Samsung and Micron, which are still developing their own versions.
The chips are expected to appear in Nvidia’s next generation of Rubin AI processors and will play an important role in driving future artificial intelligence workloads.
Beyond AI infrastructure limitations
Joohwan Cho, head of HBM development at SK Hynix, said: “Completion of HBM4 development will be a new milestone for the industry. By delivering a product that meets customer needs in performance, power efficiency and reliability in a timely manner, the company will meet time to market and maintain a competitive position.”
High-bandwidth memory, or HBM, stacks layers of DRAM vertically to speed up data transfer.
Demand for high-bandwidth memory has surged as AI workloads have grown, and energy efficiency has become a pressing concern for data center operators.
HBM4 doubles the number of I/O connections compared with the previous generation and improves power efficiency by more than 40%.
The Korean memory maker says this translates into higher throughput and lower energy consumption in data centers, with AI service performance gains of up to 69%.
HBM4 exceeds the JEDEC industry-standard operating speed of 8 Gbps, running at over 10 Gbps.
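As a rough illustration of what those figures imply, the sketch below is a back-of-the-envelope estimate (not from the article): it assumes HBM4 doubles HBM3’s 1,024-bit interface to 2,048 I/O pins and compares peak per-stack bandwidth at the 8 Gbps JEDEC rate against the 10 Gbps figure SK Hynix cites.

```python
# Back-of-the-envelope estimate of peak per-stack HBM4 bandwidth.
# Assumption (not stated in the article): HBM3 uses a 1,024-bit interface,
# so "doubling the I/O connections" puts HBM4 at 2,048 pins.

HBM4_IO_PINS = 1024 * 2        # doubled interface width, in bits
JEDEC_RATE_GBPS = 8            # JEDEC standard per-pin speed cited in the article
SK_HYNIX_RATE_GBPS = 10        # SK Hynix says its parts run at over 10 Gbps

def peak_bandwidth_tb_per_s(io_pins: int, gbps_per_pin: float) -> float:
    """Peak per-stack bandwidth in TB/s: pins x per-pin rate, converted bits -> bytes."""
    return io_pins * gbps_per_pin / 8 / 1000

print(f"At 8 Gbps (JEDEC):     {peak_bandwidth_tb_per_s(HBM4_IO_PINS, JEDEC_RATE_GBPS):.2f} TB/s")
print(f"At 10 Gbps (SK Hynix): {peak_bandwidth_tb_per_s(HBM4_IO_PINS, SK_HYNIX_RATE_GBPS):.2f} TB/s")
```

On those assumptions, a single stack would peak at roughly 2 to 2.5 TB/s.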
SK Hynix adopted its Advanced MR-MUF process to stack the chips, which improves heat dissipation and stability, along with its 1bnm process technology to help minimize production risk.
Justin Kim, President and Head of AI Infrastructure at SK Hynix, said: “We have established the world’s first mass production system for HBM4. HBM4, a symbolic turning point beyond the limitations of AI infrastructure, will be a core product for overcoming technological challenges. We will grow into a full-stack AI memory provider by supplying memory products of the best quality and diverse performance required for the AI era in a timely manner.”



