Speeding CERN LHC Research with HPC Systems and ALCF Workflow Optimizations
The amount of data processed at CERN's Large Hadron Collider (LHC) will grow significantly when CERN transitions to the High-Luminosity LHC, a facility upgrade now underway for operations planned in 2026. To help meet the LHC's growing computing needs, scientists from the ATLAS experiment are working with the Argonne Leadership Computing Facility (ALCF) to optimize ATLAS simulations for the ALCF's Intel-Cray supercomputer, Theta, improving how efficiently the workload runs on supercomputing resources. ATLAS scientists already use Globus to move files between Theta and NERSC's Cori supercomputer.
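A transfer of this kind between Theta and Cori can be scripted with the Globus Python SDK. The sketch below is illustrative only: the client ID, endpoint UUIDs, and file paths are placeholders, not the values used by the ATLAS workflow.

```python
import globus_sdk

# Placeholder values -- the actual workflow uses its own registered Globus app,
# endpoint UUIDs, and directories (all hypothetical here).
CLIENT_ID = "YOUR-GLOBUS-APP-CLIENT-ID"
THETA_ENDPOINT = "UUID-OF-ALCF-THETA-ENDPOINT"
CORI_ENDPOINT = "UUID-OF-NERSC-CORI-ENDPOINT"

# Authenticate with a native-app OAuth2 flow and build a transfer client.
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow(refresh_tokens=False)
print("Log in at:", auth_client.oauth2_get_authorize_url())
auth_code = input("Paste the authorization code here: ").strip()
tokens = auth_client.oauth2_exchange_code_for_tokens(auth_code)
transfer_tokens = tokens.by_resource_server["transfer.api.globus.org"]

transfer_client = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_tokens["access_token"])
)

# Describe a transfer from Theta to Cori; checksum sync skips unchanged files.
task = globus_sdk.TransferData(
    transfer_client,
    THETA_ENDPOINT,
    CORI_ENDPOINT,
    label="ATLAS simulation output",
    sync_level="checksum",
)
task.add_item(
    "/projects/atlas/output/",          # hypothetical source directory on Theta
    "/global/cscratch1/atlas/input/",   # hypothetical destination on Cori
    recursive=True,
)

result = transfer_client.submit_transfer(task)
print("Submitted Globus transfer, task ID:", result["task_id"])
```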
“Based on our best estimates, we’ll need about a factor of 10 increase in computing resources to handle the increased amount of data,” said Taylor Childers, a computer scientist at Argonne National Laboratory and a member of the ATLAS experiment. “By enabling a portion of the LHC grid’s workload to run on ALCF supercomputers, we can speed the production of simulation results, which will accelerate our search for evidence of new particles.”