Speaker
Dirk Duellmann
(CERN)
Description
CERN provides a significant part of the storage and CPU resources used for LHC analysis and, like many other WLCG sites, is preparing for a significant increase in resource requirements for LHC Run 3.
In this context, an analysis working group has been formed at CERN IT with the goal of enhancing science throughput by increasing the efficiency of storage and CPU services through a systematic statistical analysis of operational metrics. Starting from a more quantitative understanding of how the available IT resources are used, we aim to support joint optimisation with the LHC experiments and joint planning of upcoming investments.
In this talk we will describe the Hadoop-based infrastructure used for preprocessing medium- and long-term (1–48 months) metric collections and several of the tools used for aggregate performance analysis and prediction, and we will conclude with some results obtained with this new infrastructure.
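To illustrate the kind of aggregate analysis such a pipeline might perform, the following is a minimal sketch in plain Python: grouping preprocessed per-host efficiency samples by month and computing monthly means. The record layout, field names, and values are illustrative assumptions, not the actual CERN metric schema.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical preprocessed records: (month, host, cpu_efficiency).
# In the real infrastructure these would come from Hadoop-resident logs.
records = [
    ("2016-01", "node01", 0.82),
    ("2016-01", "node02", 0.74),
    ("2016-02", "node01", 0.88),
    ("2016-02", "node02", 0.79),
]

def monthly_mean_efficiency(rows):
    """Group efficiency samples by month and return the mean per month."""
    by_month = defaultdict(list)
    for month, _host, eff in rows:
        by_month[month].append(eff)
    return {m: mean(vals) for m, vals in sorted(by_month.items())}

print(monthly_mean_efficiency(records))
```

In practice an aggregation like this would run at scale (e.g. as a MapReduce or Spark job over the Hadoop cluster) rather than in-memory, but the grouping-and-summarising structure is the same.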
Primary author
Dirk Duellmann
(CERN)