Dr
Markus Schulz
(on behalf of WLCG)
25/09/2017, 11:00
Plenary
The LHC science program has utilized WLCG, a globally federated computing infrastructure, for the last 10 years, enabling its ~10k scientists to produce more than 1000 physics publications in peer-reviewed journals. This infrastructure has grown to provide ~750k cores, 400 PB of disk space, 600 PB of archival storage, as well as high-capacity networks to connect all of these.
Taking 2016 as a...
Dr
Kenneth Herner
(Fermi National Accelerator Laboratory)
25/09/2017, 12:30
The FabrIc for Frontier Experiments (FIFE) project is a
major initiative within the Fermilab Scientific Computing Division
designed to steer the computing model for non-LHC experiments at Fermilab. The FIFE project enables close collaboration between experimenters and
computing professionals to serve high-energy physics experiments of differing scope and physics area of study. The project...
Dr
Weidong Li
(IHEP, Beijing)
25/09/2017, 14:30
The distributed computing system at the Institute of High Energy Physics (IHEP), Chinese Academy of Sciences, was first built on DIRAC in 2013 and put into production in 2014. This presentation will introduce the development and latest status of this system: the DIRAC-based WMS was extended to support multi-VO scheduling based on VOMS; the general-purpose task submission and management...
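As a rough illustration of what user-level job submission to such a DIRAC-based WMS looks like, a minimal sketch using the standard DIRAC client Python API is shown below; the executable, job name and CPU-time request are placeholders, and the target VO is inferred from the user's VOMS proxy rather than set explicitly here.

    # Minimal DIRAC job submission sketch; assumes a DIRAC client installation
    # and a valid VOMS proxy, from which the multi-VO WMS infers the user's VO.
    from DIRAC.Core.Base import Script
    Script.parseCommandLine(ignoreErrors=True)   # standard DIRAC client initialisation

    from DIRAC.Interfaces.API.Dirac import Dirac
    from DIRAC.Interfaces.API.Job import Job

    job = Job()
    job.setName("demo_multivo_job")                        # placeholder job name
    job.setExecutable("/bin/echo", arguments="hello from DIRAC")
    job.setCPUTime(3600)                                   # requested CPU time in seconds

    result = Dirac().submitJob(job)
    print(result)   # S_OK/S_ERROR dictionary; contains the job ID on success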
Dr
Jack Wells
(Oak Ridge National Laboratory)
25/09/2017, 15:00
Plenary
Over its many-decade history, nuclear and particle physics research has been a driver for advances in high-performance computing (HPC) and has come to view HPC as an essential scientific capability. Indeed, the dawn of the twenty-first century has witnessed the widespread adoption of HPC as an essential tool in the modeling and simulation of complex scientific phenomena. And today, in 2017,...
Dr
Sergey Sidorchuk
(FLNR JINR)
25/09/2017, 15:30
The development of the experimental base of the Flerov Laboratory (JINR, Dubna) planned for the forthcoming 7-year period includes two principal directions. The first one implies the study of physical and chemical properties of nuclei in the vicinity of the so-called “Stability Island”. This activity will be developed mainly on the basis of the Super Heavy Elements (SHE) Factory. The factory,...
Mr
Mikhail Borodin
(The University of Iowa (US))
25/09/2017, 16:20
The second generation of the ATLAS Production System called ProdSys2 is a distributed workload manager that runs daily hundreds of thousands of jobs, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based upon many criteria, such as input and output size, memory requirements and...
Mr
Yannick LEGRÉ
(EGI Foundation)
26/09/2017, 09:00
Plenary
Maarten Litmaath
(CERN)
26/09/2017, 09:30
Plenary
One of the goals of the WLCG Operations Coordination activities is to help simplify what the majority of the WLCG sites, i.e. the smaller ones, need to accomplish to be able to contribute resources in a useful manner, i.e. with large benefits compared to efforts invested. This contribution describes different areas of activities which aim to allow sites to be run with minimal oversight and...
Dr
Markus Schulz
(CERN)
26/09/2017, 10:00
Plenary
The T0 at CERN operates large storage and computing farms for the LHC community. For economic reasons, the hardware of the disk servers is, with respect to CPU and memory, virtually identical to that used in the batch nodes. Monitoring data showed that these nodes are not running anywhere close to their computational limit. Proof-of-concept tests have been conducted by Andrey Kiryanov...
Dirk Duellmann
(CERN)
26/09/2017, 10:20
Dirk Duellmann
(CERN)
26/09/2017, 10:50
Plenary
CERN provides a significant part of the storage and CPU resources used for LHC analysis and is, similar to many other WLCG sites, preparing for a significant requirement increase in LHC Run 3.
In this context, an analysis working group has been formed at CERN IT with the goal to enhance science throughput by increasing the efficiency of storage and CPU services via a systematic statistical...
Ms
Julia Andreeva
(CERN)
26/09/2017, 11:50
Plenary
The WLCG infrastructure combines computing resources of more than 170 centers in 42 countries all over the world. Smooth operation of such a huge and heterogeneous infrastructure is a complicated task performed by a distributed team. The constant growth of computing resources and the evolution of technology, which introduces new types of resources such as HPC and commercial clouds,...
Mr
Alexey Anisenkov
(BINP)
26/09/2017, 12:20
Plenary
The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centers affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centers all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the LHC...
Mr
Vasilii Shvetcov
(FLNP)
26/09/2017, 14:30
Triggering, Data Acquisition, Control Systems
Sectional
Present trends towards increasing the number of detector channels and the volume of data registered and accumulated in real time in experiments on the IBR-2 reactor spectrometers in FLNP require increasing the bandwidth of data acquisition systems.
The paper considers the modernization of the data acquisition system based on the MPD and De-Li-DAQ-2D blocks developed earlier in FLNP and widely...
Mr
Alexander Avrorin
(INR RAS)
26/09/2017, 14:45
Baikal-GVD is a gigaton-volume underwater neutrino detector located in Lake Baikal. Compared to NT-200+, a previous iteration of the Baikal neutrino observatory, Baikal-GVD represents a leap in complexity and raw data output. Therefore, a new, comprehensive data management infrastructure for transfer and analysis of the experimental data has been established. It includes a two-tier data...
Mr
Alexey Voinov
(FLNR JINR)
26/09/2017, 15:00
The detection system of the Dubna gas-filled recoil separator (DGFRS), aimed at studying SHE nuclei and their decay properties, has been modernized during the last few years. A new set of multistrip double-sided silicon detectors (DSSD) in the focal plane of the DGFRS is now used instead of the old array of 12-strip position-sensitive Si detectors. The total amount of measuring...
Ms
Svetlana Murashkevich
(RUSSIA, JINR)
26/09/2017, 15:00
Triggering, Data Acquisition, Control Systems
Sectional
In this work, a software implementation of the USB 3.0 protocol stack for operating data acquisition units of the IBR-2 spectrometric system with an upgraded communication adapter is considered.
The data acquisition system based on the De-Li-DAQ-2D and MPD blocks developed earlier in FLNP is at present widely used on neutron spectrometers. To connect the modules to the computer, an FLINK fiber optic...
Mr
Leo Schlattauer
(Palacky University Olomouc, Czech Republic, JINR Dubna)
26/09/2017, 15:15
New particle position determination modules for double-sided silicon strip detectors were designed that simplify the existing multi-channel measurement system used in the search for rare events of superheavy element formation at the DGFRS. The main principle is to search for position-correlated sequences of implanted SHE and subsequent alpha-particle or SF events above a predefined threshold energy level...
Mr
Vladimir Gennadyevich Elkin
(JINR VBLHEP)
26/09/2017, 15:15
Triggering, Data Acquisition, Control Systems
Sectional
The report describes a Tango module for WebSocket connections. WebSocket is a computer communications protocol providing full-duplex communication channels over a single TCP connection. The module allows both monitoring and management of Tango devices and has several modes of operation. Depending on the selected mode, you can control either one or any number of required...
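The kind of monitoring and management operations such a WebSocket bridge relays can be sketched with the standard PyTango client API; the device used below is the generic TangoTest device, chosen purely as an illustration and not taken from the report.

    import tango  # PyTango client bindings

    # Connect to a Tango device; sys/tg_test/1 is the standard TangoTest device.
    dev = tango.DeviceProxy("sys/tg_test/1")

    # "Monitoring": one-shot read of an attribute.
    attr = dev.read_attribute("double_scalar")
    print(attr.name, attr.value, attr.quality)

    # "Management": write a writable attribute and invoke a device command.
    dev.write_attribute("double_scalar_w", 3.14)
    print(dev.command_inout("DevString", "ping"))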
Mr
Vitali Aleinikov
(JINR)
26/09/2017, 15:30
Triggering, Data Acquisition, Control Systems
Sectional
The new isochronous cyclotron DC-280 is being created at FLNR JINR. The software application uses LabVIEW and supports pneumatic step movement, data acquisition, and magnet power supply control. A complete 360-degree map is obtained in approximately 14 hours, measuring 148750 field values with a spacing of 10 mm in radius every degree. This paper describes the software part of the magnetic...
Dr
Maxim Karetnikov
(VNIIA)
26/09/2017, 15:30
In the T(d,n)He4 reaction, each 14 MeV neutron is accompanied by a 3.5 MeV alpha particle emitted in the opposite direction. A position- and time-sensitive alpha detector measures the time and coordinates of the associated alpha particle, which allows determining the time and direction (tags) of neutron escape. The tagged neutron technology is based on a time and spatial selection of events that...
Mr
Vladimir Drozdov
(FLNP JINR)
26/09/2017, 15:45
The report describes the electronics and software of data acquisition systems for thermal neutron detectors [1], which are currently used on the spectrometers of the IBR-2 reactor at JINR. The experience gained during the operation of these systems is summarized, and the results of the performance analysis of the data acquisition systems developed in the FLNP for position-sensitive neutron...
Mr
Ilya Shirikov
(JINR)
26/09/2017, 15:45
Triggering, Data Acquisition, Control Systems
Sectional
The paper deals with the creation of the concept and the first prototype of a new signal synchronization system for the Nuclotron accelerator complex.
The text describes a new scheme for the collection and distribution of the signals needed to synchronize many devices of the accelerator complex.
Much attention is given to the problems of developing the electronic equipment. The following modules...
Mr
Dmitrii Monakhov
(JINR)
26/09/2017, 16:30
A few improvements have been made to enhance the resolution of the Q measurement system, such as the development of an additional NI FlexRIO digitizer module with two 18-bit AD7960 ADCs and a TDC-GP22 for precision beam revolution frequency measurement. A new amplification system for picking up signals was developed using a diode detection technique, analog filtering and real-time gain...
Mr
Vasily Andreev
(VBLHEP JINR)
26/09/2017, 16:30
The superconducting synchrotron Nuclotron is the base of the new accelerator complex NICA designed at LHEP, JINR. It is very important to monitor the dynamics of its internal beam intensity during an acceleration cycle for proper tuning and functioning of the setup.
A new parametric current transformer from Bergoz Instrumentation with a frequency response from DC to 10 kHz is used for measuring the...
Dr
Lubomir Dimitrov
(INRNE - BAS)
26/09/2017, 16:45
The higher energy and luminosity of the future High-Luminosity LHC (HL-LHC) impose the development and testing of new types of high-rate detectors such as the GEM (Gas Electron Multiplier). A monitoring system designed for measuring the dose absorbed by the GEM detectors during the tests has recently been described [1]. The system uses a basic detector unit called RADMON. Each unit contains two types of sensors:...
Mr
Dmitriy Ponkin
(LHEP JINR)
26/09/2017, 17:00
During the work on the creation of a new Electron String Ion Source (ESIS) for the NICA/MPD project, several electronic modules were created.
They include pulsed HV (+3 kV) potential barrier formation modules used to hold ions, an HV (+3 kV) ion extraction module and several secondary modules.
The module development process and test results are described.
Hristo Nazlev
(JINR)
26/09/2017, 17:15
The report describes the requirements for the Booster injection system, its operation algorithm and realization details. The control system is based on National Instruments CompactRIO equipment and realizes injection device control, synchronization and monitoring. The results of high-voltage tests are presented.
Dr
Patrick Fuhrmann
(DESY)
27/09/2017, 09:00
When preparing the Data Management Plan for larger scientific endeavours, PIs have to balance the most appropriate qualities of storage space along the planned data lifecycle against its price and the available funding. Storage properties can be the media type, implicitly determining access latency and durability of stored data, the number and locality of replicas, as well as...
Mr
Nikita Belyaev
(NRC "Kurchatov Institute")
27/09/2017, 09:30
Higgs boson physics is one of the most important and promising fields of study in modern high energy physics. It is important to note that Grid computing resources are becoming strictly limited due to the increasing amount of statistics required for physics analyses and the unprecedented LHC performance. One of the possibilities to address the shortfall of computing resources is the usage of...
Dr
Andrei Tsaregorodtsev
(CPPM-IN2P3-CNRS)
27/09/2017, 10:00
Dr
Mohammad Al-Turany
(GSI/CERN)
27/09/2017, 10:30
ALFA is a message queue based framework for online/offline reconstruction. The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of this framework. Each process in ALFA assumes limited communication and reliance on other processes. Moreover, it does not dictate any application protocols but supports different serialization standards...
Mr
Levente Hajdu
(BNL)
27/09/2017, 11:30
Plenary
STAR's RHIC computing facility provides over 15K dedicated slots for data reconstruction. However, this number of slots is not always sufficient to satisfy an ambitious and data-challenging physics program, and harvesting resources from outside facilities is paramount to scientific success. However, constraints of remote sites (CPU time limits) do not always provide the...
Dr
Oleg Rogachevskiy
(JINR)
27/09/2017, 12:00
The study of heavy ion collisions is of great interest in high energy physics due to the expected phase transition from nucleons to the quark-gluon plasma. However, for a full picture of the effect there is a lack of experimental data in the low-energy region for nucleus-nucleus collisions. The goal of the NICA project at JINR is to cover the collision energy range from 2...
Prof.
Dario Barberis
(University and INFN Genova (Italy))
28/09/2017, 09:00
Structured data storage technologies evolve very rapidly in the IT world. LHC experiments, and ATLAS in particular, try to select and use these technologies balancing the performance for a given set of use cases with the availability, ease of use and of getting support, and stability of the product. We definitely and definitively moved from the “one fits all” (or “all has to fit into one”)...
Mr
Robert Wolff
(CPPM, Aix-Marseille Université, CNRS/IN2P3 (FR))
28/09/2017, 09:30
Plenary
The upgrade of the Large Hadron Collider (LHC), scheduled for the shutdown period of 2018-2019 (Phase-I upgrade), will increase the instantaneous luminosity to about three times the design value. Since the current ATLAS trigger system does not allow a corresponding increase of the trigger rate, an improvement of the trigger system is required.
The new trigger signals from the ATLAS Liquid...
Francesco Tartarelli
(Università degli Studi e INFN Milano)
28/09/2017, 10:00
This presentation will show the status of the upgrade projects of the ATLAS calorimeter system for the high luminosity phase of the LHC (HL-LHC). For the HL-LHC, the instantaneous luminosity is expected to increase up to L ≃ 7.5 × 10^{34} cm^{−2} s^{−1} and the average pile-up up to 200 interactions per bunch crossing.
The Liquid Argon (LAr) calorimeter electronics will need to be replaced...
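For orientation, the quoted pile-up follows from the luminosity by simple counting. Assuming an inelastic cross section of roughly 80 mb and about 2800 colliding bunch pairs at the LHC revolution frequency of 11.245 kHz (illustrative round numbers, not taken from the talk),

$$\mu \;\approx\; \frac{L\,\sigma_{\mathrm{inel}}}{n_b\, f_{\mathrm{rev}}} \;=\; \frac{7.5\times10^{34}\,\mathrm{cm^{-2}\,s^{-1}} \times 8\times10^{-26}\,\mathrm{cm^{2}}}{2808 \times 11245\,\mathrm{s^{-1}}} \;\approx\; 190,$$

consistent with the average pile-up of up to 200 quoted above.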
Mr
Mikel Eukeni Pozo Astigarraga
(CERN)
28/09/2017, 10:30
The LHC has been providing proton-proton collisions with record intensity and energy since the start of Run 2 in 2015. In the ATLAS experiment the Data Acquisition is responsible for the transport and storage of the more complex event data at higher rates that the new collision environment implies. Data from events selected by the first level hardware trigger are subject to further filtration...
Savanna Shaw
(University of Manchester)
28/09/2017, 11:30
Triggering, Data Acquisition, Control Systems
Sectional
The ATLAS trigger has been used very successfully for the online event
selection during the first part of the second LHC run (Run-2) in 2015/16
at a centre-of-mass energy of 13 TeV. The trigger system is composed of
a hardware Level-1 trigger and a software-based high-level trigger; it
reduces the event rate from the bunch-crossing rate of 40 MHz to an
average recording rate of about 1...
Dr
Alexei Klimentov
(Brookhaven National Lab)
28/09/2017, 11:45
Distributed Computing. GRID & Cloud Computing
Sectional
The Production and Distributed Analysis (PanDA) system was designed to meet the requirements for a workload management system (WMS) capable of operating at the LHC data processing scale. It has been used in the ATLAS experiment since 2005 and is now part of the BigPanDA project, expanding into a meta-application that provides transparency of data processing and workflow management for High...
Callum Kilby
(on behalf of the ATLAS collaboration)
28/09/2017, 11:45
Triggering, Data Acquisition, Control Systems
Sectional
The design and performance of the ATLAS Inner Detector (ID) trigger algorithms running online on the high level trigger (HLT) processor farm for 13 TeV LHC collision data with high pileup are discussed. The HLT ID tracking is a vital component in all physics signatures in the ATLAS Trigger for the precise selection of the rare or interesting events necessary for physics analysis without...
Mr
Artem Petrosyan
(JINR)
28/09/2017, 12:00
The LHC Computing Grid was a pioneering integration effort that managed to unite computing and storage resources all over the world, thus making them available to experiments at the Large Hadron Collider. During a decade of LHC computing, Grid software has learned to effectively utilize different types of computing resources, such as classic computing clusters, clouds and high-performance computers. And while the...
Dr
Marcus Morgenstern
(on behalf of the ATLAS collaboration)
28/09/2017, 12:00
Triggering, Data Acquisition, Control Systems
Sectional
Events containing muons in the final state are an important signature for many analyses being carried out at the Large Hadron Collider (LHC), including both standard model measurements and searches for new physics. To be able to study such events, it is required to have an efficient and well-understood muon trigger. The ATLAS muon trigger consists of a hardware based system (Level 1), as well...
Mr
Danila Oleynik
(JINR LIT)
28/09/2017, 12:15
Distributed Computing. GRID & Cloud Computing
Sectional
The Production and Distributed Analysis system (PanDA) has been used for workload management in the ATLAS Experiment for over a decade. It uses pilots to retrieve jobs from the PanDA server and execute them on worker nodes. While PanDA has been mostly used on Worldwide LHC Computing Grid (WLCG) resources for production operations, R&D work has been ongoing on cloud and HPC resources for many...
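As a schematic illustration of the pilot pull model described above (not the actual PanDA protocol; the endpoint names and job fields below are hypothetical), the core loop amounts to asking the server for work, executing the payload, and reporting back:

    # Schematic pilot loop: endpoint names and job fields are hypothetical,
    # chosen only to illustrate the pull model (the real PanDA protocol differs).
    import subprocess
    import time

    import requests

    SERVER = "https://panda.example.org"      # placeholder workload-manager URL
    SITE = "EXAMPLE_SITE"                     # placeholder site/queue name

    def pull_and_run_once():
        """Fetch one job matching this site's resources and execute its payload."""
        job = requests.get(f"{SERVER}/getjob", params={"site": SITE}, timeout=60).json()
        if not job:                           # empty reply: no work queued for us
            return False
        subprocess.run(job["command"], shell=True, check=False)   # run the payload
        # A real pilot would also stage out data; here we only report the final state.
        requests.post(f"{SERVER}/updatejob",
                      data={"jobId": job["jobId"], "state": "finished"}, timeout=60)
        return True

    if __name__ == "__main__":
        while pull_and_run_once():
            time.sleep(10)                    # pace requests to the server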
Emma Torro
(Valencia U., IFIC)
28/09/2017, 12:15
Triggering, Data Acquisition, Control Systems
Sectional
The ATLAS experiment aims at recording about 1 kHz of physics
collisions, starting with an LHC design bunch crossing rate of 40
MHz. To reduce the large background rate while maintaining a high
selection efficiency for rare physics events (such as beyond the
Standard Model physics), a two-level trigger system is used.
Events are selected based on physics signatures such as the...
Mr
Fernando Barreiro Megino
(University of Texas at Arlington)
28/09/2017, 12:30
Distributed Computing. GRID & Cloud Computing
Sectional
PanDA is the workflow management system of the ATLAS experiment at the LHC and is responsible for generating, brokering and monitoring up to two million jobs per day across 150 computing centers in the Worldwide LHC Computing Grid. The PanDA core consists of several components deployed centrally on around 20 servers. The daily log volume is around 400 GB. In certain cases,...
Dr
Konstantin Gertsenberger
(JINR)
28/09/2017, 12:30
Computing for Large Scale Facilities (LHC, FAIR, NICA, SKA, PIC, XFEL, ELI, etc.)
Sectional
One of the problems to be solved in high energy physics experiments on particle collisions and fixed-target experiments is the online visual presentation of events during the experiment run. The report describes the implementation of this task, the so-called Online Event Display, for the current BM@N experiment and the future MPD (Multi-Purpose Detector) experiment at the Nuclotron-based Ion...
Mr
Petr Vokac
(Institute of Physics of the Czech Academy of Sciences)
28/09/2017, 12:45
Today's physics experiments strongly rely on computing not only during data taking periods; a huge amount of computing resources is also necessary later for offline data analysis to obtain precise physics measurements out of the enormous amount of recorded raw data and Monte Carlo simulations. Large collaborations with members from many countries are essential for successful research on...
Mr
Vadim Babkin
(Joint Institute for Nuclear Research)
28/09/2017, 12:45
The multipurpose MPD detector is the main tool for studying the properties of hot and dense baryonic matter formed in collisions of heavy ions at the NICA accelerator complex. The sufficiently high luminosity of the collider, the complexity and diversity of the physical tasks make high demands on the performance of detectors and service systems of the MPD. The report gives a brief overview of...
Mr
Nikita Balashov
(JINR)
28/09/2017, 14:30
One of the possible ways to speed up scientific research projects in which JINR and organizations from its Member States participate is to join computational resources. It can be done in a few ways, one of which is to build distributed cloud infrastructures integrating local private clouds of JINR and organizations from its Member States. To implement such a scenario, a cloud bursting based...
Mr
Alexandr Dmitriev
(LHEP JINR)
28/09/2017, 14:30
This report is devoted to the status of the test stand of the TOF MPD system. The stand is planned to be used to carry out methodical research and mass testing of detectors for the MPD experiment at the NICA collider. The setup is described in detail.
The investigation has been performed at the Veksler and Baldin Laboratory of High Energy Physics, JINR.
Dr
Vladimir Yurevich
(JINR)
28/09/2017, 14:45
The L0 trigger system plays a crucial role in the fast and effective selection of AA collisions in both fixed target and collider experiments. The concepts of an active target area for the BM@N/Nuclotron experiment and a fast vertex-trigger system developed for the MPD experiment at the NICA collider are considered. The requirements for trigger detectors and electronics, as well as some test results, are discussed.
Mr
Igor Pelevanyuk
(JINR)
28/09/2017, 14:45
The Multifunctional Information and Computing Complex (MICC) is one of the basic scientific facilities of the Joint Institute for Nuclear Research. It provides 24×7 support for a vast range of competitive research conducted at JINR at a global level. The MICC consists of four major components: the grid infrastructure, the central computing complex, the JINR private cloud and a high-performance heterogeneous...
Oleg Samoylov
(JINR)
28/09/2017, 15:00
Distributed Computing. GRID & Cloud Computing
Sectional
NOvA is a large-scale neutrino experiment in which JINR takes part in many directions, including those connected to the usage of information technologies. A cloud resource was provided by the JINR computing center for the NOvA experiment, within which a pool of virtual machines was deployed to give local JINR users interactive access, allowing them to use this service for...
Mr
Stepan Vereschagin
(JINR)
28/09/2017, 15:00
The TPC barrel is placed in the middle of the Multi-Purpose Detector and provides tracking and identification of charged particles in the pseudorapidity range |η| ≤ 1.2.
Tracks in the TPC are registered by 24 readout chambers placed at both end-caps of the sensitive volume of the barrel. The readout system of one chamber consists of the front-end card (FEC) set and a readout control unit (RCU)....
Mr
Vitaly Shutov
(Borisovich)
28/09/2017, 15:15
Triggering, Data Acquisition, Control Systems
Sectional
The Multi-Purpose Detector (MPD) is a 4π spectrometer capable of detecting charged hadrons, electrons and photons in heavy-ion collisions at high luminosity in the energy range of the NICA collider. Among many others, one of the crucial tasks necessary for the successful operation of such a complex apparatus is providing adequate monitoring of operational parameters and convenient control of...
Prof.
Vladimir Dimitrov
(University of Sofia)
28/09/2017, 15:30
JINR develops a cloud based on OpenNebula that is open for integration with clouds from the member states. The paper presents the state of the three-year project that aims to create a backbone of that cloud in Bulgaria. The University of Sofia and INRNE participate in this initiative. It is a target project funded by JINR based on the research plan of the institute.
Mr
Georgy Sedykh
(JINR)
28/09/2017, 15:30
Triggering, Data Acquisition, Control Systems
Sectional
The control system of the superconducting magnet cryogenic test bench has been designed with Tango Controls. It includes the thermometry system and the satellite refrigerator control system. The report describes the hardware and software modules for data acquisition and management, the archiving system, the configuration system, the access control system, the web service and web client applications.
Mr
Alexander Bychkov
(LHEP)
28/09/2017, 15:45
Triggering, Data Acquisition, Control Systems
Sectional
The software used for the magnetic measurement test bench for superconducting magnets of the NICA and FAIR projects is described. The main measurement program, which collects the measured data and is responsible for sensor positioning, as well as the software for processing the measured data, are presented. Filtering and smoothing algorithms based on wavelets and splines that were...
Mr
Ruslan Smeliansky
(ARCCN)
28/09/2017, 15:45
Modern research cloud infrastructures are intended to help researchers prepare virtual environments that satisfy various specific requirements. The focus could be set on the network topology and on providing different network functions (NAT, firewall, IDS, vSwitch, etc.) in order to provide a testbed for network research or network device testing. Another focus could be set on compute resources...
Mr
Nicolai Iliuha
(RENAM)
28/09/2017, 16:20
Distributed Computing. GRID & Cloud Computing
Sectional
The paper presents the results of work focused on building a heterogeneous cloud-based scientific computing infrastructure. The main purpose of the infrastructure is to provide researchers with the possibility to access “on demand” a wide range of different types of resources that can be physically located in local, federated and GEANT-offered clouds. These resources include pure and customized...
Mr
Dmitry Egorov
(JINR)
28/09/2017, 16:20
Triggering, Data Acquisition, Control Systems
Sectional
Big modern physics experiments represent a collaboration of workgroups and require a wide variety of electronic equipment. Besides the trigger electronics or data acquisition system (DAQ), there is hardware that is not time-critical and can be run at a low priority. Slow Control systems are used for the setup and monitoring of such hardware.
Slow Control systems in a typical experiment are...
Mr
Victor Rogov
(JINR)
28/09/2017, 16:35
Triggering, Data Acquisition, Control Systems
Sectional
The BM@N facility is a fixed target experiment based on heavy ion beams of the Nuclotron-M accelerator. The aim of BM@N is to study nucleus-nucleus collisions at energies up to 4.5 GeV per nucleon. Our group is responsible for developing the trigger system for this experiment.
The described trigger system has been developed at LHEP/JINR for trigger generation in the BM@N experiment. The...
Mr
Andrey Shevel
(PNPI, ITMO)
28/09/2017, 16:40
Distributed Computing. GRID & Cloud Computing
Sectional
ITMO University (ifmo.ru) is developing a cloud of geographically distributed data centers under OpenStack. The term “geographically distributed” in our proposal means data centers (DC) located in different places, hundreds or thousands of kilometers from each other. The authors follow the conception of a “dark” DC, i.e. a DC that has to perform normal operation without permanent maintainers even...
Mr
Ilnur Gabdrakhmanov
(VBLHEP)
28/09/2017, 16:50
Triggering, Data Acquisition, Control Systems
Sectional
The BM@N experiment is a crucial stage in the technical development of the NICA project. In order to effectively maintain the experiment, it is extremely important to have a fast and convenient tool, uniform for all detectors, to monitor the experimental facility.
The system implements decoding of the incoming raw data on the fly, preprocessing and visualization on a web page. Users can monitor...
Dr
Jingyan Shi
(INSTITUTE OF HIGH ENERGY PHYSICS, Chinese Academy of Science)
28/09/2017, 16:55
Computing for Large Scale Facilities (LHC, FAIR, NICA, SKA, PIC, XFEL, ELI, etc.)
Sectional
HTCondor, a scheduler focusing on high-throughput computing, has become more and more popular in high energy physics computing. The HTCondor cluster with more than 10,000 CPU cores running at the Computing Center of the Institute of High Energy Physics in China supports several HEP experiments, such as JUNO, BES, ATLAS, CMS, etc. The worker nodes owned by the experiments are managed by HTCondor. A sharing...
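As a reminder of how work enters such a pool, a job is handed to HTCondor through a submit description; in a shared pool, an accounting group line is one common way of attributing the job to a particular experiment's share. The group name and payload below are illustrative placeholders, not the IHEP configuration.

    # Write a minimal HTCondor submit description and hand it to condor_submit.
    import subprocess
    import textwrap

    submit_description = textwrap.dedent("""\
        executable       = payload.sh
        arguments        = run01
        output           = job.$(Cluster).out
        error            = job.$(Cluster).err
        log              = job.log
        request_cpus     = 1
        request_memory   = 2 GB
        accounting_group = group_juno.someuser
        queue 1
    """)

    with open("payload.sub", "w") as fh:
        fh.write(submit_description)

    subprocess.run(["condor_submit", "payload.sub"], check=True)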
Prof.
Gennady Ososkov
(Joint Institute for Nuclear Research)
29/09/2017, 10:00
A fundamental problem of data processing for high energy and nuclear physics (HENP) experiments is event reconstruction. Its main part is finding tracks among a great number of so-called hits produced on sequential coordinate planes of tracking detectors. The track recognition problem consists in joining these hits into clusters, each of which contains all hits belonging to the same...
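In its simplest form, the combinatorial character of this problem can be illustrated by a toy one-dimensional track-following routine; the greedy nearest-neighbour sketch below is only meant to show what joining hits on sequential planes involves, not the method advocated in the talk.

    import numpy as np

    def follow_tracks(planes, max_residual=1.0):
        """Greedy nearest-neighbour track following through sequential planes.

        `planes` is a list of 1-D arrays, one per detector plane, holding the
        measured coordinate of every hit on that plane.  A candidate is seeded
        on the first plane and extended with the closest unused hit on each
        following plane, provided it lies within `max_residual` of a
        straight-line extrapolation.  A toy illustration only.
        """
        used = [np.zeros(len(p), dtype=bool) for p in planes]
        tracks = []
        for i0, x0 in enumerate(planes[0]):
            track = [x0]
            used[0][i0] = True
            for k in range(1, len(planes)):
                # straight-line prediction from the last two accepted points
                pred = track[-1] if len(track) < 2 else 2 * track[-1] - track[-2]
                residuals = np.abs(planes[k] - pred)
                residuals[used[k]] = np.inf          # skip hits already assigned
                j = int(np.argmin(residuals))
                if residuals[j] > max_residual:
                    break                            # candidate cannot be extended
                track.append(planes[k][j])
                used[k][j] = True
            if len(track) == len(planes):
                tracks.append(track)
        return tracks

    # toy event: three straight tracks plus four noise hits on each of 5 planes
    rng = np.random.default_rng(0)
    slopes, offsets = np.array([0.5, -0.2, 1.0]), np.array([0.0, 5.0, -3.0])
    planes = [np.concatenate([offsets + slopes * z, rng.uniform(-10, 10, 4)])
              for z in range(5)]
    print(follow_tracks(planes))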
Mr
Mikhail Titov
(National Research Centre «Kurchatov Institute»)
29/09/2017, 10:20
The workflow management process should be under the control of a service that is able to forecast the processing time dynamically according to the status of the processing environment and of the workflow itself, and to react immediately to any abnormal behaviour of the execution process. Such a situational awareness analytic service would provide the possibility to monitor the execution...
Mr
Petr Jancik
(JINR; VSB - Technical University of Ostrava)
29/09/2017, 10:40
IT in nuclear research has been focused mainly on mathematical modelling of nuclear phenomena and on big data analyses. The applied nuclear sciences used for environmental research bring in a different set of problems where information technologies may significantly improve the research. ICP Vegetation is an international research program investigating the impacts of air...
Nikolai Mester
(Intel)
29/09/2017, 11:20
Ms
Maria Grigorieva
(NRC KI)
29/09/2017, 11:35
Non-relational Databases and Heterogeneous Repositories
Sectional
Modern High Energy and Nuclear Physics experiments generate vast volumes of scientific data and metadata, describing scientific goals, the data provenance, conditions of the research environment, and other experiment-specific information. The Data Knowledge Base (DKB) R&D project was initially started in 2016 as a joint project of the National Research Center “Kurchatov Institute” and Tomsk...
Dr
Dmitry Podgainy
(JINR), Dr
Oksana Streltsova
(JINR)
29/09/2017, 11:40
We apply several machine-learning (ML) algorithms for the identification and separation of the neutron and gamma-ray signals coming from the DEMON (DEtecteur MOdulaire de Neutrons) detector. The ML predictions have been contrasted with the results obtained with a standard method based on an integral-area scheme. In situations where the standard method fails, a properly trained ML algorithm...
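The comparison made here can be mimicked on synthetic data: the classical integral-area (charge-comparison) scheme reduces to a fixed cut on the tail-to-total charge ratio, while an ML classifier is trained on the same pulse-shape features. All numbers below are invented for illustration and are not DEMON data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    # Synthetic pulse-shape features: total integrated charge and the charge in
    # the delayed ("tail") part of the pulse.  Neutrons carry a larger tail
    # fraction than gammas; the distributions are purely illustrative.
    n = 5000
    total = rng.uniform(50, 500, 2 * n)
    tail_frac = np.concatenate([rng.normal(0.12, 0.03, n),    # gammas
                                rng.normal(0.25, 0.05, n)])   # neutrons
    labels = np.concatenate([np.zeros(n), np.ones(n)])
    X = np.column_stack([total, tail_frac * total])

    # Integral-area style selection: a fixed cut on the tail/total charge ratio.
    cut_prediction = (X[:, 1] / X[:, 0]) > 0.18

    # ML alternative: train a classifier on the same two features.
    X_tr, X_te, y_tr, y_te, cut_tr, cut_te = train_test_split(
        X, labels, cut_prediction, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

    print("cut accuracy:", (cut_te == y_te).mean())
    print("ML  accuracy:", clf.score(X_te, y_te))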
Mrs
Marina Golosova
(National Research Center "Kurchatov Institute")
29/09/2017, 11:50
Non-relational Databases and Heterogeneous Repositories
Sectional
In modern times many large projects, sooner or later, have to face the problem of how to store, manage and access huge volumes of semi-structured and loosely connected data, namely project metadata -- information required for the monitoring and management of the project itself and its internal processes.
The structure of the metadata evolves all the time to meet the needs of the monitoring tasks...
Prof.
Viacheslav Samarin
(Joint Institute for Nuclear Research, Flerov Laboratory of Nuclear Reactions)
29/09/2017, 11:55
Computations with Hybrid Systems (CPU, GPU, coprocessors)
Sectional
Modern parallel computing solutions were used to speed up calculations by Feynman's continual integrals method. The algorithm was implemented in the C++ programming language. Calculations using NVIDIA CUDA technology were performed on the NVIDIA Tesla K40 accelerator installed within the heterogeneous cluster of the Laboratory of Information Technologies, Joint Institute for Nuclear...
Irina Filozova
(JINR)
29/09/2017, 12:05
Computing for Large Scale Facilities (LHC, FAIR, NICA, SKA, PIC, XFEL, ELI, etc.)
Sectional
In this paper we present the current state of development of the Geometry DB (Geometry Database) for the CBM experiment [1]. At the current moment, the CBM collaboration is moving from the stage of prototype research and tests to the production of detectors and their components. High-level control of the manufacturing process is required because of the complexity and high price of the detector...
Sergey Belov
(Joint Institute for Nuclear Research)
29/09/2017, 12:10
The interaction of the labour market and the educational system is a complex process with many parties involved (government, universities, employers, individuals, etc.). Both horizontal and vertical mismatches between skills and qualifications on one side and the market's requirements on the other are still widely observed in both developing and developed countries.
To discover both qualitative and...
Ms
Victoria Tokareva
(JINR)
29/09/2017, 12:25
Computations with Hybrid Systems (CPU, GPU, coprocessors)
Sectional
The partial wave analysis at the BES-III experiment is being done event by event using maximum likelihood estimation, with typical statistics of the order of 10 billion J/ψ events per year, resulting in huge computation times. On the other hand, the event-by-event analysis can be naturally parallelized.
We developed a parallel cross-platform software architecture that can run...
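The event-level parallelism mentioned above is straightforward to sketch: the total log-likelihood is a sum over events, so event chunks can be evaluated by independent workers and the partial sums combined. The per-event model below is a deliberately trivial stand-in for a partial-wave amplitude.

    import numpy as np
    from multiprocessing import Pool

    def chunk_nll(args):
        """Negative log-likelihood of one chunk of events for an exponential model."""
        events, lam = args
        return -np.sum(np.log(lam * np.exp(-lam * events)))

    def total_nll(events, lam, n_workers=4):
        """Event-level parallelism: each worker sums the log-likelihood of a chunk."""
        chunks = np.array_split(events, n_workers)
        with Pool(n_workers) as pool:
            partial_sums = pool.map(chunk_nll, [(c, lam) for c in chunks])
        return sum(partial_sums)

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        events = rng.exponential(scale=1.0 / 0.7, size=1_000_000)  # true rate 0.7
        # Coarse scan over the model parameter; the minimum should sit near 0.7.
        for lam in (0.5, 0.6, 0.7, 0.8, 0.9):
            print(lam, total_nll(events, lam))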
Mrs
Oksana Kreider
(Крейдер Оксана)
29/09/2017, 15:00
The report focuses on new trends in education. The system of open education is the path to a single world educational space, which offers unique opportunities not only for new educational initiatives on a global scale, but also for modernization of existing educational institutions.
Mrs
Evgenia Cheremisina
(Черемисина Евгения)
29/09/2017, 15:15
The report focuses on new trends in education in conditions of transition to the digital economy. The program of development of the digital economy in Russia requires new approaches to training and the use of modern digital technologies. The training strategy in modern conditions will be presented using the example of the State University «Dubna».
Mrs
Nadezhda Tokareva
(State Dubna Univeristy)
29/09/2017, 15:30
When training highly skilled IT professionals, it is an important challenge for the university to teach professional competencies to graduates that they will be able to use to successfully solve a broad range of substantive problems that arise at all stages of the lifecycle of corporate information systems. Such information systems in practice, as a rule, are used for enterprise management,...
Victor Pilyugin
(National Research Nuclear University "MEPhI", Moscow, Russian Federation)
29/09/2017, 15:45
In this paper we present the results of scientific visualization research as a joint project of the NRNU MEPhI (Moscow, Russia) and the National Centre for Computer Animation, Bournemouth University (Bournemouth, United Kingdom). We consider scientific visualization as a modern computer-based method of data analysis. The essence of this method is to establish the correspondence between the...
Ms
Victoria Belaga
(JINR), Prof.
Yury Panebrattsev
(JINR)
29/09/2017, 16:30
In this report, we would like to present a software and hardware complex used for training university students for their further work in real physics experiments. Our educational tool “Virtual Laboratory of Nuclear Fission” consists of several complementary components:
A) General view:
– Key ideas in nuclear physics and nuclear structure,
– Basic theoretical models of nuclei,
–...
Prof.
Yury Panebrattsev
(JINR)
29/09/2017, 17:00
Educational support of the NICA megaproject is aimed at attracting public attention (school and university students and a generally interested audience) to the scientific achievements of JINR and also at training specialists to work at the NICA accelerator complex in the mid-term and long-term perspective.
It is also necessary to include scientific and applied results obtained at NICA in the...
Dr
Iurii Sakharov
(Dubna International University for Nature, Society and Man)
29/09/2017, 17:15
The report investigates the influence of the Bologna Process on the Russian system of higher professional education. Changes to both the format and content of higher education made due to the European educational format are presented. An analysis of the training of bachelor's and master's students at the State Dubna University for the NICA program is given.
The key point of the educational training program...
Dr
Alexandre Karlov
(JINR)
29/09/2017, 17:30
The Internet of Things (IoT) is developing at a tremendous rate. It is a combination of devices connected via the Internet and other networks, which are capable of receiving information from the outside world, analyzing it and, if necessary, managing external devices as well as provide information for decision-making. The goal is to create a more comfortable, safer and more efficient...
Prof.
Vladimir Dimitrov
(University of Sofia)
29/09/2017, 17:45
Sectional
Research and investigations on computer security problems show that the most malicious problem is information disclosure. Today this problem is enormous in the context of new cloud services.
The paper is an overview of the main computer security components: attacks, vulnerabilities and weaknesses, with a focus on the last. An approach to information disclosure weaknesses...
Mr
Yury Akatkin
(Plekhanov Russian University of Economics)
Advanced Technologies for the High-Intensity Domains of Science and Business Applications
Sectional
Information sharing has become the key enabler for cross-agency interaction. Heterogeneous environment is inherent for the public sector and requires the application of integration methods, which will guarantee the achievement of unambiguous meaningful interpretation of data. The paper gives a brief review of international approaches to the achievement of semantic interoperability: built on...
Mr
Vladimir Mossolov
(INP BSU)
Distributed Computing. GRID & Cloud Computing
Sectional
The status of the INP BSU Tier 3 site BY-NCPHEP is presented. The experience of operation, efficiency, flexibility, reliability and versatility of the new cloud-based structure is discussed.
Dr
Happy Sithole
(Centre for High Performance Computing, South Africa)
Computing for Large Scale Facilities (LHC, FAIR, NICA, SKA, PIC, XFEL, ELI, etc.)
Plenary
Large Scale Science projects normally require massive facilities that are funded through multi-national agreements. The computational requirements for these projects are complex, as the computing technologies have to meet multiple user requirements at a scale not yet realised by current technology trends. The Square Kilometre Array (SKA), CERN, etc., are some examples of the projects,...
Prof.
Olga Tyatyushkina
(Dubna State University. Institute of system analysis and management)
Innovative IT Education
Sectional
The report considers the possibility to create a fundamentally new scientifically grounded platform for the educational process in intelligent robotics based on a new kind of intelligent self-organizing robot training device that takes into account the results of IT design development on the basis of computational intelligence toolkit and new types of mechatronics in the educational process....
Prof.
Sergey Ulyanov
(1Institute of System Analysis and Control, Dubna State University)
Sectional
Quantum relativistic mechanics, quantum thermodynamics and quantum relativistic information theory laws are the background of quantum relativistic informatics. Quantum computing, quantum programming, and quantum algorithm theories are oriented towards the simulation of quantum relativistic (open) dynamic systems using a future quantum computer (Feynman & Manin). Unconventional computational intelligence...
Dr
Sergei Afanasiev
(JINR)
Detector & Nuclear Electronics
Sectional
The study of the radiation resistance of silicon photomultipliers (SiPMs) produced by HAMAMATSU is reported. The SiPMs were irradiated in neutron fluxes of the IBR-2 reactor of JINR. The tested SiPMs received fluences from 10^12 up to 2x10^14 neutrons/cm2. Irradiated detectors were investigated using a radioactive source and laser flashes at a temperature of -30°C. The measurements showed that the...