Speaker
Dr Markus Schulz (on behalf of WLCG)
Description
The LHC science program has relied on WLCG, a globally federated computing infrastructure, for the last 10 years, enabling its ~10k scientists to publish more than 1000 physics papers in peer-reviewed journals. This infrastructure has grown to provide ~750k cores, 400 PB of disk space, 600 PB of archival storage, and the high-capacity networks that connect them.
Taking 2016 as a reference, the community processed roughly 10 trillion collision events, often requiring multiple processing passes over parts of the primary data.
Naïve projections from current practice to the HL-LHC data volumes, taking into account Moore's-law cost reductions of 10-20% per year, predict that computing hardware needs will exceed a flat hardware budget scenario by a factor of 10-25. To achieve an efficiency gain at such a scale, the community is rethinking the overall LHC computing models. These models also have to enable the efficient use of new technologies and take into account changes in how computing resources can be provisioned.
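As a rough illustration of the arithmetic behind such projections, the sketch below compares the capacity a flat budget can buy, given an annual price/performance improvement, with an assumed growth in resource needs. Only the 10-20% per year cost reduction and the resulting factor 10-25 shortfall come from the abstract; the 10-year horizon and the factor-60 growth in needs are illustrative assumptions.

```python
# Illustrative flat-budget projection; parameter values marked below are
# assumptions chosen so the result falls in the abstract's quoted range.

def flat_budget_capacity_gain(annual_cost_reduction: float, years: int) -> float:
    """Capacity growth achievable at constant spending if price/performance
    improves by `annual_cost_reduction` per year (Moore's-law style)."""
    return (1.0 + annual_cost_reduction) ** years

years_to_hl_lhc = 10   # assumed horizon from the reference year to HL-LHC operation
needed_growth = 60     # assumed growth factor in resource needs (illustrative only)

for reduction in (0.10, 0.20):   # the 10-20% per year range quoted in the abstract
    affordable = flat_budget_capacity_gain(reduction, years_to_hl_lhc)
    shortfall = needed_growth / affordable
    print(f"{reduction:.0%}/yr: flat budget buys x{affordable:.1f}, "
          f"shortfall factor ~{shortfall:.0f}")
```

With these assumed inputs the shortfall comes out between roughly 10 and 23, consistent with the factor 10-25 quoted above.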
The presentation will cover the evolution of WLCG and the current status of the discussion on future computing models.
Primary author
Dr Markus Schulz (on behalf of WLCG)