Conveners
Plenary
- Vladimir Korenkov (JINR)
- Tadeusz Kurtyka (CERN)
- Markus Schulz
- Alexei Klimentov (Brookhaven National Lab)
- Patrick Fuhrmann (DESY)
- Ivan Vankov (Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences)
Dr
Markus Schulz
(on behalf of WLCG)
25/09/2017, 11:00
Plenary
The LHC science program has utilized WLCG, a globally federated computing infrastructure, for the last 10 years, enabling its ~10k scientists to publish more than 1000 physics papers in peer-reviewed journals. This infrastructure has grown to provide ~750k cores, 400 PB of disk space, and 600 PB of archival storage, as well as the high-capacity networks that connect all of these.
Taking 2016 as a...
Dr
Kenneth Herner
(Fermi National Accelerator Laboratory)
25/09/2017, 12:30
Computing for Large Scale Facilities (LHC, FAIR, NICA, SKA, PIC, XFEL, ELI, etc.)
Plenary
The FabrIc for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division designed to steer the computing model for non-LHC experiments at Fermilab. The FIFE project enables close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing scope and physics areas of study. The project...
Dr
Weidong Li
(IHEP, Beijing)
25/09/2017, 14:30
Distributed Computing. GRID & Cloud Computing
Plenary
The distributed computing system at the Institute of High Energy Physics (IHEP), Chinese Academy of Sciences, was first built on DIRAC in 2013 and put into production in 2014. This presentation will introduce the development and latest status of this system: the DIRAC-based WMS was extended to support multi-VO scheduling based on VOMS; the general-purpose task submission and management...
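As a hedged illustration of the kind of workload such a system handles, the sketch below submits one job through DIRAC's public Python API (the Job and Dirac classes, which DIRAC does provide). The job name and payload are invented, and the multi-VO routing described in the talk happens server-side, based on the VOMS attributes of the submitter's proxy, not in this client code.

    # Minimal DIRAC job submission sketch (illustrative; not IHEP's actual
    # configuration). Which VO the job is scheduled under is determined
    # server-side from the VOMS attributes of the submitter's proxy.
    from DIRAC.Core.Base import Script
    Script.parseCommandLine()  # initialise the DIRAC client configuration

    from DIRAC.Interfaces.API.Dirac import Dirac
    from DIRAC.Interfaces.API.Job import Job

    job = Job()
    job.setName('bes3-simulation-test')                        # hypothetical name
    job.setExecutable('boss.exe', arguments='jobOptions.txt')  # hypothetical payload
    job.setCPUTime(86400)                                      # requested CPU limit, seconds

    result = Dirac().submitJob(job)
    print(result['Value'] if result['OK'] else result['Message'])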
Dr
Jack Wells
(Oak Ridge National Laboratory)
25/09/2017, 15:00
Plenary
Over its many-decade history, nuclear and particle physics research has been a driver for advances in high-performance computing (HPC) and has come to view HPC as an essential scientific capability. Indeed, the dawn of the twenty-first century has witnessed the widespread adoption of HPC as an essential tool in the modeling and simulation of complex scientific phenomena. And today, in 2017,...
Dr
Sergey Sidorchuk
(FLNR JINR)
25/09/2017, 15:30
Research Data Infrastructures
Plenary
The development of the experimental base of the Flerov Laboratory (JINR, Dubna) planned for the forthcoming seven-year period includes two principal directions. The first implies the study of physical and chemical properties of nuclei in the vicinity of the so-called “Stability Island”. This activity will be developed mainly on the basis of the Super Heavy Elements (SHE) Factory. The factory,...
Mr
Mikhail Borodin
(The University of Iowa (US))
25/09/2017, 16:20
Computing for Large Scale Facilities (LHC, FAIR, NICA, SKA, PIC, XFEL, ELI, etc.)
Plenary
The second generation of the ATLAS Production System, called ProdSys2, is a distributed workload manager that runs hundreds of thousands of jobs daily, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based upon many criteria, such as input and output size, memory requirements and...
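The dynamic job definition mentioned here can be illustrated with a small, purely hypothetical sketch (none of these names are actual ProdSys2 interfaces): input files are grouped into jobs so that each job respects configurable size and memory budgets.

    # Hypothetical sketch of dynamic job partitioning of the kind the abstract
    # describes. All names and budgets are invented for illustration.
    def partition_task(input_files, max_bytes_per_job, max_rss_mb, rss_per_file_mb):
        jobs, current, current_bytes = [], [], 0
        for name, size in input_files:          # (logical file name, size in bytes)
            over_size = current_bytes + size > max_bytes_per_job
            over_rss = (len(current) + 1) * rss_per_file_mb > max_rss_mb
            if current and (over_size or over_rss):
                jobs.append(current)             # close the current job definition
                current, current_bytes = [], 0
            current.append(name)
            current_bytes += size
        if current:
            jobs.append(current)
        return jobs

    # Example: 3 GB input cap per job, 4 GB RSS budget, ~500 MB estimated per file.
    files = [(f'EVNT.{i:06d}.root', 1_200_000_000) for i in range(7)]
    print(partition_task(files, 3_000_000_000, 4096, 500))  # -> [[...2], [...2], [...2], [...1]]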
Sarah Demers
(CERN)
25/09/2017, 16:50
Triggering, Data Acquisition, Control Systems
Plenary
By 2026 the High Luminosity LHC will be able to deliver 14 TeV proton-proton collisions to its experiments at CERN with an instantaneous luminosity an order of magnitude higher than the original design, at the expected value of 7.5 × 10^{34} cm^{−2} s^{−1}. The ATLAS experiment is planning a series of upgrades to prepare for this new and challenging environment, which will produce much higher data rates...
Dr
Patrick Fuhrmann
(DESY)
27/09/2017, 09:00
Distributed Computing. GRID & Cloud Computing
Plenary
When preparing the Data Management Plan for larger scientific endeavours, PIs have to balance the most appropriate qualities of storage space along the planned data lifecycle against their price and the available funding. Storage properties can be the media type, implicitly determining access latency and durability of stored data, the number and locality of replicas, as well as...
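A minimal sketch of the trade-off being described, with invented storage classes, prices and durabilities: given the latency and durability a lifecycle phase requires, pick the cheapest storage quality that satisfies both.

    # Illustrative only: the classes, latencies, costs and durabilities below
    # are invented, not any site's actual storage offering.
    STORAGE_CLASSES = {
        # class: (access latency in s, replicas, cost per TB-year, durability)
        'ssd-pool':  (0.01,  1, 90.0, 0.999),
        'disk-pool': (0.05,  2, 40.0, 0.9999),
        'tape':      (600.0, 1, 10.0, 0.99999),
    }

    def cheapest_class(max_latency_s, min_durability):
        """Return the cheapest class meeting the latency and durability needs."""
        eligible = [(cost, name)
                    for name, (lat, _, cost, dur) in STORAGE_CLASSES.items()
                    if lat <= max_latency_s and dur >= min_durability]
        return min(eligible)[1] if eligible else None

    print(cheapest_class(max_latency_s=0.1, min_durability=0.999))    # disk-pool (hot data)
    print(cheapest_class(max_latency_s=3600, min_durability=0.9999))  # tape (archival copy)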
Mr
Nikita Belyaev
(NRC "Kurchatov Institute")
27/09/2017, 09:30
Distributed Computing. GRID & Cloud Computing
Plenary
Higgs boson physics is one of the most important and promising fields of study in modern high energy physics. It is important to note that Grid computing resources are becoming strictly limited due to the increasing amount of statistics required for physics analyses and the unprecedented LHC performance. One of the possibilities to address the shortfall of computing resources is the usage of...
Dr
Andrei Tsaregorodtsev
(CPPM-IN2P3-CNRS)
27/09/2017, 10:00
Dr
Mohammad Al-Turany
(GSI/CERN)
27/09/2017, 10:30
Computing for Large Scale Facilities (LHC, FAIR, NICA, SKA, PIC, XFEL, ELI, etc.)
Plenary
ALFA is a message-queue-based framework for online/offline reconstruction. The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of this framework. Each process in ALFA assumes limited communication and reliance on other processes. Moreover, it does not dictate any application protocols but supports different serialization standards...
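ALFA itself is built in C++ on top of FairMQ, but the message-queue pattern it is organized around can be sketched with pyzmq (ZeroMQ being one of the transports used in that ecosystem). This is an illustration of the pattern, not ALFA's API: each process owns its sockets, assumes nothing about who produced the data, and the payload serialization (JSON here) is whatever the endpoints agree on.

    # Illustrative message-queue "device": pull events in, process, push results on.
    import json
    import zmq

    ctx = zmq.Context()
    source = ctx.socket(zmq.PULL)   # receives raw "events" from an upstream process
    source.connect('tcp://localhost:5555')
    sink = ctx.socket(zmq.PUSH)     # forwards reconstructed output downstream
    sink.connect('tcp://localhost:5556')

    while True:
        event = json.loads(source.recv())      # deserialize one message
        event['reconstructed'] = True          # stand-in for real processing
        sink.send(json.dumps(event).encode())  # pass the result on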
Mr
Levente Hajdu
(BNL)
27/09/2017, 11:30
Plenary
STAR's RHIC computing facility provides over 15K dedicated slots for data reconstruction. However, this number of slots is not always sufficient to satisfy an ambitious and data-challenging physics program, and harvesting resources from outside facilities is paramount to scientific success. Yet constraints of remote sites (CPU-time limits) do not always provide the...
Dr
Oleg Rogachevskiy
(JINR)
27/09/2017, 12:00
Detector & Nuclear Electronics
Plenary
The study of heavy-ion collisions is of great interest in high energy physics due to the expected phase transition from nucleons to the quark-gluon plasma. But for a full picture of the effect there is a lack of experimental data in the low-energy region for nucleus-nucleus collisions. The goal of the NICA project at JINR is to cover the collision energy range from 2...
Prof.
Dario Barberis
(University and INFN Genova (Italy))
28/09/2017, 09:00
Non-relational Databases and Heterogeneous Repositories
Sectional
Structured data storage technologies evolve very rapidly in the IT world. LHC experiments, and ATLAS in particular, try to select and use these technologies by balancing the performance for a given set of use cases with the availability, ease of use, ease of getting support, and stability of the product. We definitely and definitively moved from the “one fits all” (or “all has to fit into one”)...
Mr
Robert Wolff
(CPPM, Aix-Marseille Université, CNRS/IN2P3 (FR))
28/09/2017, 09:30
Plenary
The upgrade of the Large Hadron Collider (LHC) scheduled for the shutdown period of 2018-2019 (Phase-I upgrade) will increase the instantaneous luminosity to about three times the design value. Since the current ATLAS trigger system does not allow a corresponding increase of the trigger rate, an improvement of the trigger system is required.
The new trigger signals from the ATLAS Liquid...
Francesco Tartarelli
(Università degli Studi e INFN Milano)
28/09/2017, 10:00
Detector & Nuclear Electronics
Plenary
This presentation will show the status of the upgrade projects of the ATLAS calorimeter system for the high luminosity phase of the LHC (HL-LHC). For the HL-LHC, the instantaneous luminosity is expected to increase up to L ≃ 7.5 × 10^{34} cm^{−2} s^{−1} and the average pile-up up to 200 interactions per bunch crossing.
The Liquid Argon (LAr) calorimeter electronics will need to be replaced...
Mr
Mikel Eukeni Pozo Astigarraga
(CERN)
28/09/2017, 10:30
Triggering, Data Acquisition, Control Systems
Plenary
The LHC has been providing proton-proton collisions with record intensity and energy since the start of Run 2 in 2015. In the ATLAS experiment the Data Acquisition system is responsible for the transport and storage of the more complex event data at the higher rates that the new collision environment implies. Data from events selected by the first-level hardware trigger are subject to further filtration...
Prof.
Gennady Ososkov
(Joint Institute for Nuclear Research)
29/09/2017, 10:00
Machine Learning Algorithms and Big Data Analytics
Sectional
A fundamental problem of data processing for high energy and nuclear physics (HENP) experiments is event reconstruction. Its main part is finding tracks among a great number of so-called hits produced on sequential coordinate planes of tracking detectors. The track recognition problem consists in joining these hits into clusters, each of which joins all hits belonging to the same...
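As a toy illustration of this clustering task (not the method presented in the talk, which concerns more robust approaches such as neural networks), the sketch below follows track candidates across sequential planes by nearest-neighbour attachment.

    # Toy track following: grow one candidate per first-plane hit by attaching
    # the closest unclaimed hit on each subsequent plane, if near enough.
    def follow_tracks(planes, max_dist):
        """planes: list of lists of (y, z) hits, one list per detector plane."""
        tracks = [[hit] for hit in planes[0]]            # seed candidates
        for hits in planes[1:]:
            for track in tracks:
                last = track[-1]
                near = [h for h in hits
                        if (h[0]-last[0])**2 + (h[1]-last[1])**2 <= max_dist**2]
                if near:
                    best = min(near,
                               key=lambda h: (h[0]-last[0])**2 + (h[1]-last[1])**2)
                    track.append(best)
                    hits.remove(best)                    # each hit joins one track
        return tracks

    planes = [[(0.0, 0.0), (5.0, 0.0)],
              [(0.4, 0.1), (5.2, 0.2)],
              [(0.9, 0.3), (5.1, 0.4)]]
    print(follow_tracks(planes, max_dist=1.0))  # -> two 3-hit track candidates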
Mr
Mikhail Titov
(National Research Centre «Kurchatov Institute»)
29/09/2017, 10:20
Machine Learning Algorithms and Big Data Analytics
Sectional
The workflow management process should be under the control of a dedicated service that is able to forecast the processing time dynamically according to the status of the processing environment and the workflow itself, and to react immediately to any abnormal behaviour of the execution process. Such a situational awareness analytic service would provide the possibility to monitor the execution...
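A deliberately simple sketch of such forecasting and anomaly flagging, with invented numbers: predict the next duration from a rolling window of past runs and flag executions that deviate by more than a few standard deviations. A production service would use richer models and live environment status.

    # Rolling-window duration forecast with a simple n-sigma anomaly flag.
    from statistics import mean, stdev

    def forecast(history, window=20):
        """Predict the next duration as the mean of the last `window` runs."""
        recent = history[-window:]
        return mean(recent), (stdev(recent) if len(recent) > 1 else 0.0)

    def is_anomalous(observed, history, n_sigma=3.0):
        predicted, sigma = forecast(history)
        return sigma > 0 and abs(observed - predicted) > n_sigma * sigma

    durations = [3600, 3550, 3700, 3620, 3580, 3660]  # seconds, invented history
    print(forecast(durations))                        # (~3618, ~55)
    print(is_anomalous(5200, durations))              # True: investigate this run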
Mr
Petr Jancik
(JINR; VSB - Technical University of Ostrava)
29/09/2017, 10:40
Machine Learning Algorithms and Big Data Analytics
Sectional
IT in nuclear research has been focused mainly on mathematical modelling of nuclear phenomena and on big data analyses. The applied nuclear sciences used for environmental research bring in a different set of problems where information technologies may significantly improve the research. The ICP Vegetation is an international research program investigating the impacts of air...