Conveners
Computing for Large Scale Accelerator Facilities (LHC, FAIR, NICA, etc.) and Big Data
- Julia Andreeva (CERN)
- Dario Barberis (University and INFN Genova, Italy)
Dr Mohammad Al-Turany (GSI/CERN)
02/10/2015, 09:00
The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of ALFA (the ALICE-FAIR framework), a common, experiment-independent software framework. ALFA is designed for high-quality parallel data processing and reconstruction on heterogeneous computing systems. It provides a data transport layer and the capability to coordinate...
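ALFA itself is a C++ framework built on FairMQ, but the message-queue transport pattern the abstract refers to can be illustrated in a few lines. The sketch below, assuming only the pyzmq package, shows two independent "devices" exchanging serialized events over a socket, the property that lets processing stages run as separate processes on heterogeneous hosts; the device names, the endpoint, and the toy "reconstruction" step are all illustrative, not ALFA's actual API.

```python
import threading

import zmq

ENDPOINT = "inproc://alfa-demo"  # would be tcp://host:port across machines


def processor(ctx: zmq.Context) -> None:
    """Pull serialized events and run a stand-in 'reconstruction' step."""
    pull = ctx.socket(zmq.PULL)
    pull.connect(ENDPOINT)
    while True:
        msg = pull.recv_json()
        if msg["event_id"] is None:  # end-of-stream marker
            break
        print(f"event {msg['event_id']} -> {sum(msg['raw'])}")
    pull.close()


if __name__ == "__main__":
    ctx = zmq.Context()
    push = ctx.socket(zmq.PUSH)
    push.bind(ENDPOINT)  # bind before the consumer connects (inproc transport)
    worker = threading.Thread(target=processor, args=(ctx,))
    worker.start()
    # The 'sampler' device: emit raw events downstream as messages.
    for i in range(3):
        push.send_json({"event_id": i, "raw": [i, i + 1, i + 2]})
    push.send_json({"event_id": None})
    worker.join()
    push.close()
    ctx.term()
```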
Dr Ilija Vukotic (University of Chicago)
02/10/2015, 09:30
The ATLAS data analytics effort is focused on creating systems that provide ATLAS Distributed Computing (ADC) with new capabilities for understanding distributed systems and overall operational performance. These capabilities include: warehousing information from multiple systems (the production and distributed analysis system - PanDA, the distributed data management system - Rucio, the file transfer system,...
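As a hedged illustration of the warehousing idea, the sketch below normalises records from two different sources into one flat schema, which is what makes queries across systems possible; the field names and values are placeholders, not the real PanDA or Rucio schemas.

```python
def normalise(source: str, record: dict) -> dict:
    """Map a source-specific monitoring record onto one warehouse schema."""
    if source == "panda":
        return {"source": source, "ts": record["modificationTime"],
                "site": record["computingSite"], "status": record["jobStatus"]}
    if source == "rucio":
        return {"source": source, "ts": record["updated_at"],
                "site": record["dst_rse"], "status": record["state"]}
    raise ValueError(f"unknown source: {source!r}")


warehouse = [
    normalise("panda", {"modificationTime": "2015-10-02T09:30:00",
                        "computingSite": "CERN-PROD", "jobStatus": "failed"}),
    normalise("rucio", {"updated_at": "2015-10-02T09:31:00",
                        "dst_rse": "BNL-OSG2", "state": "DONE"}),
]
# With one schema, a single query spans job and transfer data alike.
failures = [r for r in warehouse if r["status"].lower() == "failed"]
print(f"{len(warehouse)} records warehoused, {len(failures)} failure(s)")
```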
Dr Alexei Klimentov (Brookhaven National Lab)
02/10/2015, 10:00
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data-driven scientific exploration. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited with the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled...
Mr Mikhail Borodin (NRNU MEPHI, NRC KI)
02/10/2015, 10:30
The data processing and simulation needs of the ATLAS experiment at the LHC grow continuously, as more data are collected and more use cases emerge. For data processing, the ATLAS experiment adopted the data transformation approach, where software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of...
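A minimal sketch of that model, with illustrative names rather than the actual ATLAS production system schema: a transformation maps inputs to outputs, and a task groups the jobs that apply one transformation to many input files.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Job:
    input_file: str
    output_file: str
    status: str = "pending"


@dataclass
class Task:
    transformation: str              # e.g. "RAW -> ESD reconstruction"
    jobs: List[Job] = field(default_factory=list)

    def add_input(self, input_file: str) -> None:
        # Each input file becomes one job applying the same transformation.
        self.jobs.append(Job(input_file, input_file + ".out"))

    def progress(self) -> float:
        done = sum(1 for j in self.jobs if j.status == "done")
        return done / len(self.jobs) if self.jobs else 0.0


task = Task("RAW -> ESD reconstruction")
for f in ("run1.RAW", "run2.RAW"):
    task.add_input(f)
task.jobs[0].status = "done"
print(f"{task.transformation}: {task.progress():.0%} complete")
```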
Dr Patrick Fuhrmann (DESY)
02/10/2015, 11:10
The availability of cheap, easy-to-use sync-and-share cloud services has split the scientific storage world into the traditional big data management systems and the very attractive sync-and-share services. With the former, the location of data is well understood, while the latter are mostly operated in the Cloud, resulting in a rather complex legal situation. Besides legal issues, those two...
Eygene Ryabinkin (NRC "Kurchatov Institute")
02/10/2015, 11:40
We review the current status and the programme of future developments of the data-intensive high-performance/high-throughput computing complex for mega-science at NRC "Kurchatov Institute", which supports the priority scientific task "Development of mathematical models, algorithms and software for systems with extramassive parallelism for pilot science and technical areas". Major upgrades...
Prof. Alexander Degtyarev
02/10/2015, 12:10
Dealing with large volumes of data is tedious work that is often delegated to a computer, and increasingly this task is delegated not to a single computer but to a whole distributed computing system. As the number of computers in a distributed system increases, the effort required to manage the system effectively grows. When the system reaches some...