Prof. Victor Matveev (JINR)
Dr Livio Mapelli (CERN)
Although the flagship of CERN physics is the Large Hadron Collider (LHC), the CERN scientific programme is varied and diversified. It extends to low-energy nuclear physics, antiproton experimentation and fixed-target experiments at intermediate energies. After the Higgs discovery in 2012, intense activity has started to prepare for the future. While the high priority still remains the LHC...
Dr Christoph Schaefer (CERN) , Dr Tadeusz Kurtyka (CERN)
Ian Bird (CERN)
The Worldwide LHC Computing Grid (WLCG) has been in production for more than 10 years, supporting the preparations for, and then the first run of, the LHC. It has shown itself to be one of the pillars of the infrastructure necessary to enable the rapid production of physics results from the LHC, and has been in constant use at a very high load since its first introduction. However, even from...
Dr Massimo Lamanna (CERN)
CERN IT operates the main storage resources for data taking and physics analysis, mainly via three systems: AFS, CASTOR and EOS. Managed disk storage amounts to about 100 PB (with relative ratios 1:10:30). EOS deploys disk resources evenly across the two CERN computer centres (Meyrin and Wigner). The physics data archive (CASTOR) contains about 100 PB so far. We are also providing sizeable...
Dr Vladimir Korenkov (JINR)
The report introduces the status and evolution of the information technologies at JINR. The objective of the Laboratory of Information Technologies activity is to provide further development of the JINR network and information infrastructure required by the research and production activity of JINR and its Member States, using the most advanced information technologies. The existing Central...
Dr Dmitry Peshekhonov (JINR)
The scientific program and current status of the realization of the NICA project are presented in the report. A new scientific project, NICA (the Nuclotron-based Ion Collider fAcility), is now under preparation at the Joint Institute for Nuclear Research (JINR) in Dubna. The project is aimed at two scientific programs: the study of hot and dense baryonic matter under extreme conditions and at...
121. Virtualization of computations - new approaches and technologies: from data storage systems to desktops
Mr Alexander Paramonov (Candidate of technical Science, MBA, ACC)
Dr Alexey Struchenko (Jet Infosystems)
Dmitry Garanov (Niagara, Moscow)
Dr Lubomir Dimitrov (Institute for Nuclear Research and Nuclear Energy)
The higher energy and luminosity of the future High Luminosity (HL) LHC will significantly increase the radiation background around the CMS subdetectors, especially in the high-pseudorapidity region. Under such harsh conditions, the RPCs (used in the muon trigger) most probably could not operate effectively. A possible better solution is the so-called GEM (Gas Electron Multiplier)...
Dr Oleg Strekalovsky (JINR)
High-speed switched-capacitor waveform digitizers are increasingly used in studies of rare events in nuclear physics. Digitizers complement the classic analog input systems or completely replace them. A trigger signal identifying an interesting event is required to start registration. The discriminator threshold levels are set individually via USB 2.0. Trigger signal generating...
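As a simple illustration (a sketch, not the authors' implementation, where real thresholds are set in hardware), the trigger logic of a leading-edge discriminator can be expressed in a few lines: the trigger fires at the first sample that crosses the threshold.

```python
def leading_edge_trigger(samples, threshold):
    """Return the index of the first sample at or above threshold,
    or None if the waveform never crosses it (software sketch of a
    leading-edge discriminator)."""
    for i, s in enumerate(samples):
        if s >= threshold:
            return i
    return None

# a toy waveform: the trigger fires at sample 3
t0 = leading_edge_trigger([0, 1, 2, 5, 9, 3], 5)
```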
64. Status of the Front-end-Electronics based on the NINO ASIC for the Time-of-Flight measurements in the MPD
Mr Mikhail Buryakov (JINR, LHEP)
A conceptual design of the MultiPurpose Detector (MPD) is proposed for a study of hot and dense baryonic matter in collisions of heavy ions over the atomic mass range A = 1–197 at a centre-of-mass energy up to √(s_NN ) = 11 GeV (for Au79+). The MPD experiment is foreseen to be carried out at a future JINR accelerator complex facility for heavy ions – the Nuclotron-based Ion Collider fAcility...
37. Magnetic measurement system for series production of NICA superconducting magnets. Data acquisition, control and data analysis.
Mr Vladimir Borisov (JINR)
The Nuclotron-based Ion Collider fAcility (NICA) is the new accelerator complex being constructed at JINR. More than 250 superconducting (SC) magnets will be assembled and tested at the new test facility in the Laboratory of High Energy Physics JINR. The magnetic measurement system for the NICA booster dipole magnets was built and commissioned in late 2013. First cryogenic measurements of ...
Aleksey Kuznetsov (JINR)
Several setups for super-heavy element synthesis, including multi-detector spectrometers of nuclear reaction products, have been developed in FLNR. These setups are VASSILISSA, DGFRS (Dubna Gas-Filled Recoil Separator), MASHA, etc. The number of channels in such spectrometers grows continuously and now reaches several hundred. Electronics for such spectrometers should be...
Mr Stefan Motycak (JINR)
A new beam diagnostic system, based on the PXI standard, was developed, tested and used in the experiment for the MASHA setup. The beam energy and beam current are measured using a few different methods. Online time-of-flight energy measurement was done using three pick-up detectors. The distance between the first pair of detectors was 2 meters and between the second pair of detectors 11...
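For orientation, a relativistic time-of-flight energy estimate over a known base can be sketched as follows. The 2 m base is from the text; the 20 ns flight time is a purely hypothetical example value.

```python
import math

C = 299_792_458.0        # speed of light, m/s
AMU_MEV = 931.494        # atomic mass unit, MeV/c^2

def kinetic_energy_per_nucleon(base_m, tof_s):
    """Kinetic energy per nucleon (MeV/u) from the flight time
    measured between two pick-up detectors a known distance apart."""
    beta = base_m / (tof_s * C)          # v/c from distance and time
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return (gamma - 1.0) * AMU_MEV

# hypothetical example: 2 m base, 20 ns flight time -> ~57 MeV/u
e_kin = kinetic_energy_per_nucleon(2.0, 20e-9)
```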
Dr Nikolay Gorbunov (JINR)
The purpose of the TUS space experiment is to study ultrahigh-energy cosmic rays by registering the extensive air showers they generate, using a satellite in space. The concentrator located on the satellite is made in the form of a Fresnel mirror directed toward the Earth's atmosphere, with a photodetector at its focus. The angle of view of the mirror is ±5°, that for the set...
Mr Evgeny Gorbachev (JINR)
The Nuclotron is a 6 GeV/n superconducting proton synchrotron operating at JINR, Dubna, since 1993. It will be the core of the future accelerator complex NICA, which is now under construction. The TANGO-based control system of the accelerator complex is under development. The report describes its structure, main features and present status.
Dr Maxim Karetnikov (All-Russia Research Institute of Automatics)
In the T(d,n)He4 reaction, each 14 MeV neutron is accompanied (tagged) by a 3.5 MeV alpha particle emitted in the opposite direction. A position- and time-sensitive alpha detector measures the time and coordinates of the associated alpha particle, which allows determining the time and direction of neutron escape. A spectrum of gamma rays emitted in the interaction of tagged neutrons with nuclei of...
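The geometry above can be sketched in a few lines: in the back-to-back approximation, the neutron direction is simply opposite to the measured alpha direction. The detector coordinates and distance below are hypothetical example values, not parameters of the actual apparatus.

```python
import math

def neutron_direction(ax, ay, dist):
    """Unit vector of the tagged neutron, taken as opposite to the
    detected alpha (back-to-back approximation, valid for low deuteron
    energy). (ax, ay) is the alpha hit position on the detector plane,
    dist the target-to-plane distance; all values hypothetical."""
    norm = math.sqrt(ax * ax + ay * ay + dist * dist)
    # alpha flies toward (ax, ay, dist); the neutron goes the opposite way
    return (-ax / norm, -ay / norm, -dist / norm)
```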
Mr Yury Tsyganov (JINR)
As heavy-ion beams reach extremely high intensities, new requirements for the detection system of the Dubna Gas-Filled Recoil Separator (DGFRS) will definitely be set. One of the challenges is how to apply the “active correlations” method [1-5] to suppress beam-associated background products without significant losses in the overall long-term experiment efficiency. Different...
Mr Dmitrii Monakhov (JINR)
The betatron tune is one of the important beam parameters that must be known and controlled to avoid beam instability in a circular particle accelerator. A real-time method for betatron tune measurements at the Nuclotron and the NICA Booster was developed and tested. A band-limited noise source and a chirp (frequency sweep) were used for beam excitation. The transverse beam oscillation signals were...
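The principle of the measurement can be illustrated with a toy simulation (assumptions: a synthetic turn-by-turn pickup signal with an arbitrarily chosen fractional tune of 0.217, not a value from the report): after excitation, the fractional tune appears as the dominant peak in the spectrum of the transverse oscillation signal.

```python
import numpy as np

q_true = 0.217                 # assumed fractional betatron tune
n_turns = 4096
turns = np.arange(n_turns)

# synthetic turn-by-turn pickup signal: coherent oscillation plus noise
rng = np.random.default_rng(0)
x = np.cos(2 * np.pi * q_true * turns) + 0.2 * rng.standard_normal(n_turns)

# the tune is the location of the spectral peak (DC bin excluded)
spectrum = np.abs(np.fft.rfft(x * np.hanning(n_turns)))
freqs = np.fft.rfftfreq(n_turns)
q_meas = freqs[np.argmax(spectrum[1:]) + 1]
```

The frequency resolution here is 1/4096 of the revolution frequency; interpolating around the peak would refine it further.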
Mr Alexey Voinov (JINR)
A new series of experiments aimed at the synthesis and the study of decay properties of both the most neutron-deficient isotopes of element Fl (Z = 114) and the heaviest isotopes of element 118 is being planned at the DGFRS (FLNR JINR). An appropriate registering system should be implemented to serve spectrometric data coming from the full-absorption double-sided silicon strip detector...
15. DeLiDAQ-2D – a new data acquisition system for position-sensitive neutron detectors with delay-line readout
Ms Svetlana Murashkevich (JINR)
Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, Dubna, Russia. Software for a data acquisition system for modern one- and two-dimensional position-sensitive detectors with delay-line readout, including a software interface to the new electronic module DeLiDAQ-2D with a USB interface, is presented. The new system, after successful tests on the stand and on...
Mr Georgy Sedykh (JINR)
Precise temperature control in various parts of the magnet and thermostat is one of the vital problems during cryogenic tests. The report describes the design of the thermometry system developed in LHEP JINR. The hardware consists of resistance temperature detectors of TVO and PT100 types, precision current sources and multi-channel high-resolution acquisition devices from National Instruments....
Aleksey Novoselov (JINR)
One of the significant changes of recent years at the MASHA mass spectrometer (Mass-Analyzer of Super Heavy Atoms), located at the JINR Flerov Laboratory of Nuclear Reactions, was the upgrade of the data acquisition system. The main difference from the previous CAMAC DAQ is the use of a new modern platform, National Instruments PXI, with XIA multichannel high-speed digitizers (250 MHz, 12 bit, 16 channels). There...
Mr Vasily Andreev (VBLHEP JINR)
TANGO Controls is the basis of the NICA control system. The report describes the software that integrates the Nuclotron beam slow-extraction subsystem into the TANGO system of NICA. The objects of control are the resonance lens power supplies and the extracted-beam spill controller. The software consists of the subsystem device server, a remote client and a web module for viewing the subsystem data. The...
Mr Dmitriy Ponkin (LHEP JINR)
The work is devoted to the study and development of a beam emittance measurement device for the ESIS KRION-6T using the sectioned ion collector method. In the course of the work, the possibility of charge measurement using a multichannel ADC with current input was investigated. An MCU-based data acquisition system was designed, and system tests were carried out.
Mr Ilya Shirikov (-)
The report describes the features of the development and creation of the main oscillator for the high-frequency linear accelerator of the NICA complex. The report: - presents the principles of construction of the five-channel precision generator with automatic adjustment of signal frequency and phase; - examines the principles of frequency adjustment of the HiLac resonators; - reports about...
Mr Ivan Filippov (JINR)
Every experiment has its own software. The current report describes the DAQ software of the BM@N experiment: • Run Control – a program that controls (configures, prepares, starts/stops) 'Run' execution. • First Level Processor – a layer that controls the data flow from the Detector Readout Electronics (DRE), checking and formatting it. • Event-building systems – buffering the data flow, sorting sub-events and...
Dr Victor Zamriy (JINR)
The talk discusses the development of host-based systems for carrying out measurements and data acquisition to control a great number of pulse parameters and pulsed facilities of accelerators. We consider possible modes of timing and allocation of measuring operations, as well as storage, processing and output of the data for groups of channels or tasks. The time period or intensity of operations and...
Andrey Yudin (Vladimirovich)
This article presents the software and hardware parts of the project to automate the control of the 8-lens focusing channel of the Phasotron at DLNP JINR. The article describes the goals, concepts and features of the software, developed with Python and Qt.
Mr Victor Rogov (JINR)
The report focuses on the development of the L0 Trigger Unit for the BM@N setup. The L0 Trigger Unit (T0U) generates the trigger signal based on beam-line and target-area detector signals. This module also provides both control and monitoring of the detector front-end electronics power supplies. The T0U was successfully tested during the BM@N test run with the Nuclotron beam in February-March 2015.
Mr Ivan Slepov (JINR)
Mr Andrey Terletskiy (JINR)
The report describes the structure of the data acquisition electronics at BM@N in three interrelated parts. The first is a short description of the electronic modules, their technical characteristics, functionality and the detectors with which they were used. The second describes the synchronization method that was used, in particular the White Rabbit protocol and its...
Mr Dmitry Egorov (JINR)
Big modern physics experiments represent a collaboration of workgroups and require a wide variety of electronic equipment. Besides the trigger electronics and the data acquisition system (DAQ), there is hardware that is not time-critical and can be run at a low priority. Slow Control systems are used for the setup and monitoring of such hardware. Slow Control systems in a typical experiment are...
Lidija Zivkovic (Institute of Physics Belgrade, Belgrade, Serbia)
In high-energy physics experiments, online selection is crucial to select interesting collisions from the large data volume. ATLAS b-jet triggers are designed to identify heavy-flavour content in real-time and provide the only option to efficiently record events with fully hadronic final states containing b-jets. In doing so, two different, but related, challenges are faced. The physics goal...
Lee Sawyer (Louisiana Tech University, USA)
The new centre of mass energy and high luminosity conditions during Run 2 of the Large Hadron Collider impose ever more demanding constraints on the ATLAS online trigger reconstruction and selection system. To cope with these conditions, the hardware-based Level-1 trigger now includes a Topological Processor and the software-based High Level Trigger has been redesigned, merging the two...
Ryan White (Universidad Técnica Federico Santa María, Valparaíso, Chile)
Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for signal selection in a wide variety of ATLAS physics analyses to study Standard Model processes and to search for new phenomena. Final states including leptons and photons had, for example, an important role in the discovery and measurement of the Higgs particle. Dedicated triggers are...
Dr Yang Qin (University of Manchester, UK)
The design and performance of the ATLAS Inner Detector (ID) trigger algorithms running online on the high level trigger (HLT) processor farm with the early LHC Run 2 data are discussed. During the 2013-15 LHC shutdown, the HLT farm was redesigned to run in a single HLT stage, rather than the two-stage (Level 2 and Event Filter) used in Run 1. This allowed a redesign of the HLT ID tracking...
Needa Asbah (DESY, Hamburg, Germany)
The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch crossing rate of 40 MHz to about 1 kHz at the design luminosity of 10^34 cm^-2 s^-1. After a successful period of data taking from 2010 to early 2013, the LHC is restarting in 2015 with much higher instantaneous luminosity, and this will increase the load on the High Level Trigger system, the...
Mr Tatsuya Mori (The University of Tokyo)
The Large Hadron Collider (LHC) is foreseen to be upgraded during the shut-down period of 2018-2019 to deliver about 3 times the instantaneous design luminosity. Since the ATLAS trigger system at that time will not allow an increase of the trigger rate, an improvement of the trigger system is required. The ATLAS LAr Calorimeter read-out will therefore be modified and digital trigger signals...
Prof. Dario Barberis (University and INFN Genova (Italy))
The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, run and event metadata. The rapid...
Mr Konstantin Gertsenberger (JINR)
Today the use of databases is a prerequisite for qualitative management of, and unified access to, the data of modern high-energy physics experiments. The database described in this report is designed as a comprehensive data storage for the ongoing sessions of the fixed-target experiment BM@N at the Joint Institute for Nuclear Research. The structure and purposes of the BM@N facility will...
Mr Thurein Kyaw (Lwin)
The problem of big data is becoming increasingly important in our time. Parallel database management systems (DBMS) are an effective solution for data storage and processing. Lately there has been interest in the same approach for data supply in large-scale computations, based on the use of parallel DBMS providing parallel processing of the requests of distributed computing systems. There...
Maksim Bashashin (JINR)
The growth of science projects is changing the criteria for their efficiency: project implementation requires not only a higher level of management specialization, but also poses the problem of choosing effective methods of planning, deadline monitoring and interaction among the participants of research projects. This paper is devoted to the choice of a project management information...
Irina Filozova (JINR)
The article presents the vision of the JINR Corporate Information System (JINR CIS): analysis of the current situation, goals and objectives, business requirements, functional requirements, the system structure, assumptions and dependencies, and other factors. Special attention is given to the information support of scientific research – Current Research Information Systems as part of the...
Mr Sebastian Bukowiec (CERN)
The continuous growth of luminosity in high energy physics with the LHC restart in 2015 results in a larger amount of data to be analysed and a corresponding increase in required computing power. Given these challenges, we have adopted a number of open source projects used by other large scale deployments elsewhere and contributed to those communities. In particular, OpenStack was chosen as the...
Prof. Alexander Bogdanov (St.Petersburg State University)
Having the computing power of a large system at hand has long been a dream of computational scientists. There were many very interesting proposals in that direction, but there were always bottlenecks that managed to ruin the original idea. We review some of those problems and argue that new technologies can bring solutions to at least a majority of them. The use of cloud technologies...
Ms Julia Andreeva (CERN)
Monitoring the WLCG infrastructure requires gathering and analyzing a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks, to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, process and serve...
Prof. Gennady Ososkov (JINR)
A simulation concept for grid-cloud services of contemporary HENP experiments of the Big Data scale was formulated through practical use of the simulation system developed in LIT JINR, Dubna. This system is intended to improve the efficiency of the design and development of a wide class of grid-cloud structures by using work quality indicators of a real system to design and predict its evolution. For...
Dr Alexei Klimentov (Brookhaven National Lab) , Mr Dimitrii Krasnopevtsev (National Research Nuclear University MEPhI (RU))
After the early success in discovering a new particle consistent with the long-awaited Higgs boson, the Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of detector performance at high-occupancy conditions is important for many ongoing...
Andreas-Joachim Peters (CERN)
The EOS project at CERN is providing large scale storage systems to LHC experiments and many other projects at CERN and beyond. In order to further increase the scalability and availability of the system we are investigating several new technologies such as ethernet connected disk drives and non-volatile memory implementations to further decrease the cost of ownership and the downtime after...
Dr Andrei Tsaregorodtsev (CPPM-IN2P3-CNRS)
Multiple research user communities need to pool their computing resources in common infrastructures in order to boost the efficiency of their usage. Various grid infrastructures try to help new users to start doing computations by providing services facilitating access to distributed computing resources. The DIRAC project provides software for creating and operating such...
Mr Jan Kundrát (Institute of Physics of the AS CR and CESNET)
Dr Petr Zrelov (LIT JINR)
The paper reviews the present status and the perspectives of development of the heterogeneous computing cluster HybriLIT (http://hybrilit.jinr.ru/) which was put into operation in 2014 at the Laboratory of Information Technologies of JINR. HybriLIT provides possibilities to carry out high performance computing within the Multifunctional Information and Computing Complex in LIT JINR. The...
Mr Nichita Degteariov (RENAM)
In recent years, technologies for distributed information processing and high-performance computing (HPC, distributed Cloud and Grid computing infrastructures) for solving complex tasks with high demands on computing resources have been actively developing. In Moldova, the work on the creation of high-performance and distributed computing infrastructures started relatively recently, due to participation...
Mr Vitaly Yermolchyk (NC PHEP BSU)
The status of the NC PHEP BSU Tier 3 site is presented. The transition to rack-mounted servers has started. Due to the need for a more scalable and reliable platform providing efficient resource utilization, the tier infrastructure was ported to a cloud with distributed storage. The choice and setup of the cloud are discussed.
Ms Nataliia Kulabukhova (Saint Petersburg State University)
In this work, by Virtual Accelerator we mean a set of services and tools enabling transparent execution of computational software for modeling beam dynamics in accelerators using distributed computing resources. The main use of the Virtual Accelerator is the simulation of beam dynamics by different packages, with the opportunity to match them and the possibility to create pipelines of tasks...
Furano Fabrizio (CERN IT/SDC)
The Dynamic Federations project ("dynafed") enables the deployment of scalable, distributed storage systems composed of independent storage endpoints. While the Uniform Generic Redirector at the heart of the project is protocol agnostic, we have focussed our effort on HTTP-based protocols, including S3 and WebDAV. The system has been deployed on testbeds covering the majority of ...
59. Application of cluster analysis and autoregressive neural networks for the noise diagnostics of the IBR-2M reactor
Dr Yuri Pepelyshev (JINR)
Pattern recognition methodologies and artificial neural networks have been widely used for reactor noise diagnostics. This is very important for the IBR-2M pulsed reactor of periodic operation (Dubna, Russia), whose sensitivity to reactivity fluctuations is 40 times higher than that of stationary reactors with uranium fuel. Cluster analysis allows a detailed study of the structure and...
Dr Nikolay Kutovskiy (JINR)
To fulfill JINR commitments in different national and international projects related to modern information technologies, such as cloud and grid computing, as well as to provide the same tools for JINR users for their scientific research, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. The OpenNebula software was chosen...
Mr Roman Semenov (JINR)
This article describes the construction of a distributed storage system and options for using it. Ceph (FS), GlusterFS, MooseFS and LizardFS were studied as candidates for creating such a storage system. As a result of the analysis, the system that is currently used as the storage of the JINR cloud service was chosen. The article also covers the options for access to the cloud storage and how to implement them.
Mr Andrei Ivashchenko (St.Petersburg State University)
This work aims to develop a system that will effectively solve the problem of storing and analyzing files containing text data, using modern software development tools, techniques and approaches. The main challenges of storing a large number of text documents, defined at the problem formulation stage, have to be resolved with such functionality as full-text search and document...
91. Impact of Configuration Management system of computer center on support of scientific projects throughout their lifecycle
Mr Nikolai Iuzhanin (SPbSU)
In this article, the problem of support of scientific projects in a computer centre is considered throughout their lifecycle and in every aspect of support. The Configuration Management system plays a connecting role in the processes related to the provision and support of the computer centre's services. In view of the strong integration of IT infrastructure components with the use of virtualization,...
Mr Nikita Balashov (JINR)
The variability of the workloads experienced by modern user applications leads to an uneven distribution of workloads across physical resources and ineffective hardware utilization in cloud data centers. Some ways to solve this problem are reviewed, and the need to develop algorithms to optimize the hardware utilization of clouds is shown. As an example of one of the promising approaches, a smart algorithm...
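To make the consolidation problem concrete (this is a generic greedy heuristic for illustration, not the algorithm from the report), VM placement can be viewed as bin packing: pack VM loads onto as few hosts as possible, leaving idle hosts free to be powered down. The loads and capacity below are hypothetical.

```python
def first_fit_decreasing(vm_loads, host_capacity):
    """Greedy first-fit-decreasing packing of VM loads onto hosts.
    Returns a {vm: host_index} placement and the number of hosts used."""
    free = []        # remaining capacity per host
    placement = {}
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for i, cap in enumerate(free):
            if load <= cap:          # first host that still fits this VM
                free[i] -= load
                placement[vm] = i
                break
        else:                        # no host fits: open a new one
            free.append(host_capacity - load)
            placement[vm] = len(free) - 1
    return placement, len(free)

# hypothetical loads in CPU cores on hosts with 10 cores each
placement, n_hosts = first_fit_decreasing(
    {"a": 6, "b": 5, "c": 4, "d": 3, "e": 2}, 10)
```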
Mr Igor Pelevanyuk (JINR)
The BES-III experiment at the Institute of High Energy Physics (Beijing, China) is aimed at precision measurements in e+e- annihilation in the energy range from 2.0 to 4.6 GeV. The world's largest samples of J/psi and psi' events and unique samples of XYZ data have already been collected. The expected increase of the data volume in the coming years requires a significant evolution of the...
Mr Ivan Gankevich (Saint Petersburg State University)
Efficient distribution of high-performance computing resources according to actual application needs, along with comfortable and transparent access to these resources, has been an open question since HPC technologies became widely adopted. One class of applications that requires such functionality is physics applications. In this paper we discuss issues and approaches to managing resources...
Mr Evgeny Boger (JINR)
BEAN is a lightweight ROOT-based analysis-only framework designed for the BES-III experiment. A number of approaches to parallel computing are used in BEAN: batch systems, ROOT PROOF and Apache Hadoop. The latter is particularly interesting for particle-physics applications, being a new de facto standard in parallel computing. We present here the implementation details of PROOF and Hadoop support...
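The map-reduce model underlying both PROOF and Hadoop can be illustrated with a toy event sample (hypothetical data, not BEAN code): each worker fills a partial histogram from its events (map), and the partial histograms are merged by an associative operation (reduce), so the result is independent of how the events were split across workers.

```python
from collections import Counter
from functools import reduce

# toy "events": each event is a list of track momenta in GeV (hypothetical)
events = [[0.4, 1.2], [2.3], [0.9, 3.1, 1.8], [2.2, 0.5]]

def map_fill(event):
    """Map step: fill a partial histogram (1 GeV bins) from one event."""
    return Counter(int(p) for p in event)

def merge(h1, h2):
    """Reduce step: merge partial histograms (associative, order-free)."""
    return h1 + h2

histogram = reduce(merge, map(map_fill, events), Counter())
```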
Mr Dmitry Guschansky (St.Petersburg State University)
Modern information technologies have an impact on research in all possible areas of knowledge, and the humanities are no exception. Some of them, such as psychology and sociology, can use observations of human behavior and the opinions of individuals and communities as a basis for research. One of the possible ways to acquire such data is from social networking services,...
132. Development of cross-platform communication library in C++, with support for multiple scripting languages: architectural pitfalls
Mr Oleg Iakushkin (Saint Petersburg State University)
Dr Sergey Manoshin (FLNP JINR)
Nowadays, practically every new neutron spectrometer is simulated before construction or modernization, and its parameters are optimized using calculations on fast modern computers. Several leading world neutron centres develop new, and support old, program packages (MCSTAS, VITESS, RESTRAX, NISP) using the Monte Carlo method. In FLNP, modules for...
Dr Charalampos Kouzinopoulos (CERN)
The Hough Transform algorithm is a popular image analysis method that is widely used to perform global pattern recognition in images through the identification of local patterns in a suitably chosen parameter space. The algorithm can also be used to perform track reconstruction: to estimate the trajectories of individual particles passing through the sensitive elements of a detector volume....
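A minimal sketch of the line-finding case (generic textbook parameterization, not the code discussed in the talk): each hit point votes for all (theta, rho) pairs of lines passing through it, and collinear hits pile up in one accumulator cell. Bin counts and the rho range are arbitrary example choices.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=100, rho_max=10.0):
    """Accumulate votes in (theta, rho) space for 2D hit points,
    using the normal form rho = x*cos(theta) + y*sin(theta)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)  # one sinusoid per hit
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[ok], idx[ok]] += 1      # cast the votes
    return acc, thetas, rhos

# five collinear hits on the line y = 2 produce one tall accumulator peak
points = [(float(x), 2.0) for x in range(5)]
acc, thetas, rhos = hough_lines(points)
```

The peak location directly gives the line parameters; in track reconstruction the same voting idea is applied with track-model parameters (e.g. curvature and direction) instead of (theta, rho).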
Dr Alexander Kryukov (SINP MSU)
Dr Mohammad Al-Turany (GSI/CERN)
The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of a common software framework in an experiment-independent way: ALFA (the ALICE-FAIR framework). ALFA is designed for high quality parallel data processing and reconstruction on heterogeneous computing systems. It provides a data transport layer and the capability to coordinate...
Dr Ilija Vukotic (University of Chicago)
The ATLAS Data analytics effort is focused on creating systems which provide the ATLAS ADC with new capabilities for understanding distributed systems and overall operational performance. These capabilities include: warehousing information from multiple systems (the production and distributed analysis system - PanDA, the distributed data management system - Rucio, the file transfer system,...
Dr Alexei Klimentov (Brookhaven National Lab)
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big-Data-driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled...
Mr Mikhail Borodin (NRNU MEPHI, NRC KI)
The data processing and simulation needs of the ATLAS experiment at LHC grow continuously, as more data are collected and more use cases emerge. For data processing the ATLAS experiment adopted the data transformation approach, where software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of...
Dr Patrick Fuhrmann (DESY)
The availability of cheap, easy-to-use sync-and-share cloud services has split the scientific storage world into the traditional big data management systems and the very attractive sync-and-share services. With the former, the location of data is well understood while the latter is mostly operated in the Cloud, resulting in a rather complex legal situation. Beside legal issues, those two...
Eygene Ryabinkin (NRC "Kurchatov Institute")
The review of the current status and the program for future development of the data-intensive high-performance/high-throughput computing complex for mega-science at NRC "Kurchatov Institute", supporting the priority scientific task “Development of mathematical models, algorithms and software for systems with extramassive parallelism for pilot science and technical areas”, is presented. Major upgrades...
Prof. Alexander Degtyarev (Professor)
Dealing with large volumes of data is tedious work which is often delegated to a computer, and increasingly this task is delegated not just to a single computer but to a whole distributed computing system at once. As the number of computers in a distributed system increases, so does the amount of effort put into effective management of the system. When the system reaches some...
Prof. Yury Panebrattsev (JINR)
Modern education assumes significantly expanded cooperation of universities with leading scientific centers for the training of highly qualified specialists. This report focuses on the MEPhI and JINR joint project for the STAR experiment at RHIC (Brookhaven National Laboratory). The STAR experiment is one of the leading international collaborations in the field of modern nuclear physics. Many...
Mrs Evgenia Cheremisina (Dubna International University of Nature, Society and Man. State Scientific Centre «VNIIgeosystem».)
A specialized modular program complex was developed for ensuring technological support of research and administrative activity in the sphere of environmental management. Its components provide realization of the three main stages of any similar project: - effective management of data and construction of information and analytical systems of various complexity; - complex analytical processing of...
Eygene Ryabinkin (NRC "Kurchatov Institute")
An overview of Tier-1 operations during the beginning of LHC Run-2 will be presented. We will talk about the three supported experiments, ALICE, ATLAS and LHCb: the current status of resources and computing support, challenges, problems and solutions. We will also give an overview of the wide-area networking situation and of the integration of our Tier-1 with regional Tier-2 centers.
Dr Iurii Sakharov (Dubna International University for Nature,Society and Man)
This report provides an insight into the transition of Russian higher education to standard 3+, the peculiarities of bachelor programs under standard 3 and the formation of educational programs of the new standard. Special attention is paid to the optimization of the network of higher education institutions through consolidation, building a network of Russian universities able to be on the...
46. Hardware-Software Complex “Virtual Laboratory of Nuclear Fission” for LIS Experiment (Flerov Laboratory of Nuclear Reactions, JINR)
Ms Ksenia Klygina (JINR) , Ms Victoria Belaga (JINR) , Prof. Yury Panebrattsev (JINR)
One important aspect in the pedagogy of modern education is the integration of technological elements of modern science into the educational process. This integration has given rise to what has come to be referred to as blended learning. In this report we focus on the hardware-software complex “Virtual Laboratory of Nuclear Fission” as an example of the incorporation of current scientific data...
Dr Tatiana Strizh (JINR)
An overview of the JINR Tier-1 centre for the CMS experiment at the LHC is given. A special emphasis is placed on the main tasks and services of the CMS Tier-1 at JINR. In February 2015 the JINR CMS Tier-1 resources were increased to the level that was outlined in JINR's rollout plan: CPU 2400 cores (28800 HEP-Spec06), 2.4 PB disks, and 5.0 PB tapes. The first results of Tier-1 operations...
Dr Elena Tikhonenko (JINR)
The Compact Muon Solenoid (CMS) is a high-performance general-purpose detector at the Large Hadron Collider (LHC) at CERN. The Russia and Dubna Member States (RDMS) CMS collaboration was founded in 1994. More than twenty institutes from Russia and the Joint Institute for Nuclear Research (JINR) Member States are involved in the collaboration. The RDMS CMS takes an...
Ms Ksenia Klygina (JINR)
These days there is a lot of media material available on the internet for educators, including papers and lectures for a wide range of courses and educational programs. But if one wishes to use some new, interesting multimedia resources in a classroom, it takes a lot of time to find good-quality pedagogical resources that match one’s own needs and requirements. The second problem is...
107. E-learning as a Technological Tool to Meet the Requirements of Professional Standards in Training of IT Specialists
Olga Tyatyushkina (Dubna University)
We discuss issues of updating educational programs according to the requirements of the labor market and the professional standards of the IT industry. We suggest a technology of e-learning through an open educational resource to enable the participation of employers in the development of educational content and the intensification of practical training.
Prof. Alexander SHARMAZANASHVILI (Georgian Technical University) , Mr Niko Tsutskiridze (Georgian Technical University)
The data-versus-Monte-Carlo discrepancy is one of the most important fields of investigation in ATLAS simulation studies. There are several reasons for these discrepancies, but the primary interest falls on geometry studies and on investigating how adequately the geometry descriptions of the detector in simulation represent the “as-built” descriptions. Shape consistency and level of detail are not...
71. Adaptive educational environment in the IT field of study reacting on changes in the labor market
Mr Yury Samoylenko (Dubna University)
The article describes modern approaches to creating educational environments, presents the main technologies for their creation and development, and gives examples of projects in this area, both in Russia and abroad. The needs of participants in the educational process were identified and formalized, and a concept of an adaptive educational environment in the IT field of study...
Ivan Bednyakov (JINR)
This report describes actions that help system administrators quickly commission or replace cluster hardware. The report includes general knowledge and specific examples for better understanding: in particular, working with IPMI (Intelligent Platform Management Interface), remote configuration of a Worker Node over SSH, and working with DHCP. A method of copying and deploying Worker Node images...
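The IPMI steps mentioned above can be sketched as command-line construction; the host name, user and action set below are hypothetical, and `ipmitool`'s real `-E` flag reads the password from the IPMI_PASSWORD environment variable rather than the command line.

```python
import shlex

def ipmi_cmd(host, user, action):
    """Compose an ipmitool command line for out-of-band node control.
    The host and user values are placeholders; -E makes ipmitool read
    the password from the IPMI_PASSWORD environment variable."""
    base = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-E"]
    actions = {
        "status": ["chassis", "power", "status"],
        "on":     ["chassis", "power", "on"],
        "cycle":  ["chassis", "power", "cycle"],
        "pxe":    ["chassis", "bootdev", "pxe"],  # boot next from network, e.g. to re-image
    }
    return " ".join(shlex.quote(p) for p in base + actions[action])

# e.g. tell a (hypothetical) worker node's BMC to PXE-boot for re-imaging
cmd = ipmi_cmd("wn042-ipmi.example.org", "admin", "pxe")
```

The same pattern extends naturally to batch operations over a node list, with the resulting strings fed to a remote-execution layer such as SSH.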
Ms Victoriya Osipova (Tomsk Polytechnic University, Tomsk, Russia)
Traditional relational databases (RDBMS) have been built around consistency and normalized data structures. RDBMS served well for decades, but the technology is not optimal for data processing and analysis in data-intensive fields such as social networks, the oil and gas industry, experiments at the Large Hadron Collider, etc. Several challenges have been raised recently concerning the scalability of data...
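The trade-off behind that scalability question can be shown with a toy illustration (the schema below is hypothetical, not any experiment's actual data model): a normalized relational layout needs a join per read, while a denormalized document is self-contained, which is what eases horizontal sharding.

```python
# Normalized relational layout: event rows reference a separate "runs" table.
runs = {1: {"run_id": 1, "detector": "CMS"}}
events = [{"event_id": 10, "run_id": 1, "energy_gev": 13.2}]

# Relational-style access: every read requires a join
# (a lookup in the second table) to assemble the full record.
joined = [{**e, **runs[e["run_id"]]} for e in events]

# Document-store style: each record is self-contained, so a read touches
# a single document and data can be partitioned across nodes freely.
docs = [{"event_id": 10, "energy_gev": 13.2,
         "run": {"run_id": 1, "detector": "CMS"}}]
```

The cost is duplicated run metadata in every document, i.e. the classic consistency-versus-scalability trade that the abstract alludes to.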
Dr Alexander Karlov (JINR)
The growing demand for qualified IT experts raises serious challenges for the education and training of young professionals who will address the scientific, industrial and social problems of tomorrow. Virtualization has a great impact on education, allowing its efficiency to be increased, costs to be cut and the student audience to be expanded by abstracting users from the physical characteristics of computing resources....
Ms Maria Grigorieva (National Research Center “Kurchatov Institute”)
Scientific computing in the field of High Energy and Nuclear Physics (HENP) produces vast volumes of data. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the Large Hadron Collider (LHC), operating at the international CERN laboratory in Geneva, Switzerland; it runs up to 1.5 M jobs daily, submitting them via the PanDA workload management system....
84. Virtual Computer Laboratory 2.0. 3D Graphics as Service. Methodological aspects of the use in research and education.
Nadezhda Tokareva (Dubna University)
The authors review the practice of implementing the Virtual Computer Laboratory at Dubna International University for Nature, Society and Man. The new generation of the virtual computer laboratory has introduced game-changing technology that makes virtualization of professional 3D graphics applications easy to deliver and meets the performance expectations of students studying for...
Mr Artem Petrosyan (JINR)
PanDA (Production and Distributed Analysis System) is a workload management system widely used for data processing at experiments at the Large Hadron Collider (LHC) and elsewhere. COMPASS is a high-energy physics experiment at the Super Proton Synchrotron (SPS). Data processing for COMPASS has historically run locally at CERN, on lxbatch, with the data stored in CASTOR. In 2014 an idea to start...
Dr Andrea Favareto (University and INFN Genova (Italy))
The ATLAS experiment collects billions of events per year of data-taking and processes them to make them available for physics analysis in several different formats. In addition, an even larger number of events is simulated according to physics and detector models and then reconstructed and analysed for comparison with real events. The EventIndex is a catalogue of all events in each production...
Mr Ignacio Barrientos Arias (CERN)
The CERN IT Department provides configuration management services to LHC experiments and to the department itself for more than 17,000 physical and virtual machines in two data centres. The services are based on open-source technologies such as Puppet and Foreman. The presentation will give an overview of the current deployment, the issues observed during the last years, the solutions adopted,...
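A minimal sketch of the kind of manifest such a Puppet-based service applies to its nodes; the class, package and file names below are illustrative assumptions, not CERN's actual configuration.

```puppet
# Hypothetical node profile -- names are illustrative only.
class profile::worker_node {
  # Ensure a diagnostic tool is present on every managed node.
  package { 'htop':
    ensure => installed,
  }

  # Keep the SSH daemon running and enabled at boot.
  service { 'sshd':
    ensure => running,
    enable => true,
  }

  # Declare desired file content; Puppet converges drift back to it.
  file { '/etc/motd':
    ensure  => file,
    content => "Managed by Puppet -- local changes will be overwritten\n",
  }
}
```

Declaring desired state this way (rather than scripting imperative steps) is what lets one team keep 17,000 machines convergent, with Foreman handling node classification and reporting.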
Mr Serob Balyan (Saint-Petersburg State University) , Mr Suren Abrahamyan (Saint-Petersburg State University)
Nowadays the use of distributed collaboration tools is widespread in many areas of human activity. But a lack of mobility and a certain equipment dependency create difficulties and decelerate the development and integration of such technologies. Mobile technologies, in contrast, allow individuals to interact with each other without the need for traditional office spaces and regardless of location. Hence,...
Valeriy Parubets (National Research Tomsk Polytechnic University)
The work reviews the development of a mathematical solution for modeling heterogeneous distributed data storage. Different modeling approaches (Monte Carlo, agent-based modeling) are reviewed. A performance analysis of systems based on commercial solutions from Oracle and on free solutions (Cassandra, Hadoop) is provided. It is assumed that the developed tool will help optimize data...
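The Monte-Carlo style of modeling mentioned above can be sketched as follows; the per-node latencies, traffic mix and jitter range are invented numbers for illustration, not measurements of any real storage system.

```python
import random

def simulate_mean_latency(node_latencies_ms, weights, n_requests=100_000, seed=42):
    """Monte-Carlo estimate of the mean request latency for a heterogeneous
    storage pool: requests pick a node class with the given probabilities,
    then draw +/-20% uniform jitter around that node's nominal latency."""
    rng = random.Random(seed)  # fixed seed for reproducible runs
    total = 0.0
    for _ in range(n_requests):
        nominal = rng.choices(node_latencies_ms, weights=weights)[0]
        total += nominal * rng.uniform(0.8, 1.2)
    return total / n_requests

# 70% of requests hit fast (2 ms) nodes, 30% hit slow (8 ms) nodes,
# so the mean should converge near 0.7*2 + 0.3*8 = 3.8 ms.
mean = simulate_mean_latency([2.0, 8.0], weights=[0.7, 0.3])
```

An agent-based variant would replace the independent draws with stateful node agents (queues, failures), but the sampling skeleton stays the same.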
Dr Alexander Khilchenko (Budker Institute, Novosibirsk) , Dr Igor Semenov (Project Center ITER)
ITER (International Thermonuclear Experimental Reactor) is one of the most complex international mega-projects (Cadarache, France). It integrates more than 180 technical subsystems (vacuum, cooling, power supplies, cryogenics, plasma diagnostics, etc.), procured from different Participant Teams through their 7 Domestic Agencies (China, EU, India, Japan, Korea, RF, US). COntrol, Data...