Conveners
Workload Management Systems in Applied Research and BigData
- Alexei Klimentov (Brookhaven National Lab)
- Andrei Tsaregorodtsev (CPPM-IN2P3-CNRS)
Mrs
Evgenia Cheremisina
(Dubna International University of Nature, Society and Man; State Scientific Centre «VNIIgeosystem»)
02/10/2015, 14:00
A specialized modular program complex was developed to provide technological support for research and administrative activities in the field of environmental management. Its components implement the three main stages of any such project:
- effective data management and the construction of information and analytical systems of varying complexity;
- complex analytical processing of...
Eygene Ryabinkin
(NRC "Kurchatov Institute")
02/10/2015, 14:20
An overview of Tier-1 operations during the beginning of LHC Run-2 will be presented. We will cover the three supported experiments, ALICE, ATLAS and LHCb: the current status of resources and computing support, and the challenges, problems and solutions. We will also give an overview of the wide-area networking situation and the integration of our Tier-1 with regional Tier-2 centres.
Dr
Tatiana Strizh
(JINR)
02/10/2015, 14:40
Oral
An overview of the JINR Tier-1 centre for the CMS experiment at the LHC is given. A special emphasis is placed on the main tasks and services of the CMS Tier-1 at JINR. In February 2015 the JINR CMS Tier-1 resources were increased to the level outlined in JINR's rollout plan: 2400 CPU cores (28800 HEP-Spec06), 2.4 PB of disk, and 5.0 PB of tape. The first results of Tier-1 operations...
Dr
Elena Tikhonenko
(JINR)
02/10/2015, 14:55
The Compact Muon Solenoid (CMS) is a high-performance general-purpose detector at the Large Hadron Collider (LHC) at CERN. The Russia and Dubna Member States (RDMS) CMS collaboration was founded in 1994. More than twenty institutes from Russia and the Joint Institute for Nuclear Research (JINR) are involved in the collaboration. The RDMS CMS takes an...
Prof.
Alexander SHARMAZANASHVILI
(Georgian Technical University), Mr
Niko Tsutskiridze
(Georgian Technical University)
02/10/2015, 15:10
The discrepancy between data and Monte Carlo is one of the most important fields of investigation in ATLAS simulation studies. There are several reasons for these discrepancies, but the primary interest falls on geometry studies and on investigating how adequately the geometry descriptions of the detector in simulation represent the “as-built” descriptions. The consistency and level of detail of the shapes is not...
Ms
Victoriya Osipova
(Tomsk Polytechnic University, Tomsk, Russia)
02/10/2015, 15:40
Traditional relational databases (RDBMS) were designed for normalized data structures. RDBMS have served well for decades, but the technology is not optimal for data processing and analysis in data-intensive fields such as social networks, the oil and gas industry, experiments at the Large Hadron Collider, etc. Several challenges have recently been raised concerning the scalability of data...
Ms
Maria Grigorieva
(National Research Center “Kurchatov Institute”)
02/10/2015, 15:55
Scientific computing in the field of High Energy and Nuclear Physics (HENP) produces vast volumes of data. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the Large Hadron Collider (LHC), operating at the international CERN laboratory in Geneva, Switzerland; it runs up to 1.5 M jobs daily and submits them using the PanDA workload management system....
Mr
Artem Petrosyan
(JINR)
02/10/2015, 16:30
PanDA (Production and Distributed Analysis System) is a workload management system widely used for data processing by experiments at the Large Hadron Collider (LHC) and elsewhere. COMPASS is a high-energy physics experiment at the Super Proton Synchrotron (SPS). COMPASS data processing has historically run locally at CERN, on lxbatch, with the data stored in CASTOR. In 2014 an idea to start...
Dr
Andrea Favareto
(University and INFN Genova (Italy))
02/10/2015, 16:45
The ATLAS experiment collects billions of events per year of data-taking and processes them to make them available for physics analysis in several different formats. In addition, an even larger number of events is simulated according to physics and detector models and then reconstructed and analysed for comparison with real events. The EventIndex is a catalogue of all events in each production...
Mr
Ignacio Barrientos Arias
(CERN)
02/10/2015, 17:00
The CERN IT Department provides configuration management services to the LHC experiments and to the department itself for more than 17,000 physical and virtual machines in two data centres. The services are based on open-source technologies such as Puppet and Foreman. The presentation will give an overview of the current deployment, the issues observed over the last few years, the solutions adopted,...
Mr
Serob Balyan
(Saint-Petersburg State University), Mr
Suren Abrahamyan
(Saint-Petersburg State University)
02/10/2015, 17:15
Nowadays the use of distributed collaboration tools is widespread in many areas of human activity, but a lack of mobility and dependence on specific equipment create difficulties and slow the development and integration of such technologies. Mobile technologies, by contrast, allow individuals to interact with each other without the need for traditional office spaces and regardless of location. Hence,...
Valeriy Parubets
(National Research Tomsk Polytechnic University)
02/10/2015, 17:30
This work reviews the development of a mathematical solution for modeling heterogeneous distributed data storage. Different modeling approaches (Monte Carlo, agent-based modeling) are reviewed. A performance analysis of systems based on commercial Oracle solutions and on free solutions (Cassandra, Hadoop) is provided. It is assumed that the developed tool will help optimize data...