In 1996, the United States and other nuclear powers committed to the Comprehensive Nuclear-Test-Ban Treaty, which prohibits nuclear test explosions. The National Nuclear Security Administration (NNSA), a successor to the Manhattan Project, now tests nukes only in simulation. To that end, the NNSA yesterday unveiled the world’s fastest supercomputer to aid in its mission to maintain a safe, secure, and reliable nuclear stockpile.
El Capitan was announced yesterday at the SC Conference for supercomputing in Atlanta, Georgia, where it debuted at #1 on the newest Top500 list, a twice-yearly ranking of the world’s highest-performing supercomputers. El Capitan, housed at Lawrence Livermore National Laboratory in Livermore, Calif., can perform over 2,700 quadrillion operations per second (2.7 exaflops) at its peak. The previous record holder, Frontier, tops out at just over 2,000 quadrillion peak operations per second.
Alongside El Capitan, the NNSA announced its unclassified cousin, Tuolumne, which debuted at #10 on the Top500 list and can perform 288 quadrillion operations per second at its peak.
The NNSA—which oversees Lawrence Livermore as well as Los Alamos National Laboratory and Sandia National Laboratories—plans to use El Capitan to “model and predict nuclear weapon performance, aging effects, and safety,” says Corey Hinderstein, acting principal deputy administrator at NNSA. Hinderstein says the new supercomputer’s speed will significantly enhance 3D modeling of multiple physics processes. The team also plans to use El Capitan in its inertial confinement fusion work, as well as to train artificial intelligence models in support of both efforts.
Planning for El Capitan began in 2018, and construction has been ongoing for the past four years. The system was built by Hewlett Packard Enterprise, which built all of the current top three supercomputers on the Top500 list. El Capitan uses AMD’s MI300A chip, dubbed an accelerated processing unit (APU), which combines a CPU and GPU in one package. In total, the system boasts 44,544 MI300As, linked by HPE’s Slingshot interconnect.
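Those two numbers line up with a back-of-envelope check. The sketch below is not an official calculation; it assumes AMD’s published double-precision peak of roughly 61.3 teraflops per MI300A, and the true system figure depends on clock speeds and counting conventions.

```python
# Rough sanity check: APU count times per-APU peak should land near
# El Capitan's quoted system peak. The 61.3e12 figure is AMD's published
# FP64 vector peak for the MI300A -- an assumption here, not a number
# taken from the article.

num_apus = 44_544                # MI300A packages in El Capitan
fp64_peak_per_apu = 61.3e12      # double-precision flop/s per MI300A

system_peak = num_apus * fp64_peak_per_apu
print(f"Estimated system peak: {system_peak / 1e18:.2f} exaflop/s")
# Prints about 2.73 exaflop/s -- in line with the "over 2,700 quadrillion
# operations per second" cited above.
```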
Scientists are already at work porting their code over to the new machine, and they are enthusiastic about its promise. “We’re seeing significant speedups compared to running on old chips versus this new thing,” says Luc Peterson, a computational physicist at Lawrence Livermore National Laboratory. “We are at the point where our time to science is shrinking. We can do things in a few days that would have taken a few months. So we’re pretty excited about the applications.”
Yet the appetite for ever larger supercomputers lives on. “We are already working on the next [high performance computing] acquisition,” says Thuc Hoang, director of the advanced simulation and computing program at NNSA.