Description
A critical challenge of the High-Luminosity Large Hadron Collider (HL-LHC), the next phase of LHC operation, is the increased computing capacity required to process the experiment data. Meeting this demand with today's computing model would exceed realistic funding levels by an order of magnitude. Many architectural, organizational, and technical changes are being investigated to address this challenge. This talk describes the prototype of a WLCG data lake: a storage service of geographically distributed data centers connected by a low-latency network. The architecture of an EOS data lake is presented, showing how it leverages economies of scale to decrease cost. The talk discusses first experiences with the prototype and benchmark jobs reading data from the lake.
Primary author
Mr Ivan Kadochnikov (JINR)
Co-authors
Gavin McCance (CERN)
Dr Ian Bird (CERN)
Jaroslava Schovancova (CERN)
Maria Girone (CERN)
Simone Campana (CERN)
Xavier Espinal Curull (CERN)