- El Capitan is classified US government property that crunches data related to the US nuclear arsenal
- ServeTheHome’s Patrick Kennedy was invited to the launch at LLNL in California
- The CEOs of AMD and HPE were also part of the ceremony
In November 2024, the AMD-powered El Capitan officially became the world’s fastest supercomputer, delivering 2.7 exaflops of peak performance and 1.7 exaflops of sustained performance.
Built by HPE for the National Nuclear Security Administration (NNSA) at Lawrence Livermore National Laboratory (LLNL) to simulate nuclear weapons testing, it is powered by AMD Instinct MI300A APUs. It dethroned the previous leader, Frontier, pushing it down to second place among the most powerful supercomputers in the world.
Patrick Kennedy from ServeTheHome was recently invited to the launch event at LLNL in California, which also included the CEOs of AMD and HPE, and was allowed to bring his phone to capture “some pictures before El Capitan reaches its classified mission.”
Not the biggest
During the tour, Kennedy observed, “Each rack has 128 compute blades that are completely liquid cooled. It was very quiet on this system, with more noise from the storage and other systems on the floor.”
He then noted, “On the other side of the racks we have the HPE Slingshot connection that is wired with both DACs and optics.”
The Slingshot interconnect side of El Capitan is, as you’d expect, liquid-cooled, with switch trays taking up only the bottom half of the space. LLNL explained to Kennedy that their codes don’t require full population, leaving the top half for the “Rabbit,” a liquid-cooled unit that houses 18 NVMe SSDs.
Looking into the system, Kennedy saw “a CPU that looks like an AMD EPYC 7003 Milan part, which feels about right considering the AMD MI300A’s generation. Unlike the APU, the Rabbit’s CPU had DIMMs and what looks like DDR4 memory that is liquid-cooled. Like the standard wings, everything is liquid-cooled, so there are no fans in the system.”
While El Capitan is less than half the size of the xAI Colossus cluster as it stood in September, when Elon Musk’s supercomputer was equipped with “just” 100,000 Nvidia H100 GPUs (plans are afoot to expand it to a million GPUs), Kennedy points out that “systems like this are still huge and run on a fraction of the budget of a 100,000 plus GPU system.”