SCIENCE BRINGS NATIONS TOGETHER
Montenegro, Budva, Becici, 28 September – 02 October 2015

Timezone: Europe/Podgorica
Budva, Becici, Hotel Splendid, Conference Hall

Description
On 28 September – 02 October 2015, Budva (Becici), Montenegro, will host the regular XXV JINR Symposium on Nuclear Electronics and Computing, NEC'2015. The symposia have been held since 1963. This year's Symposium is dedicated to the 60th anniversary of JINR.

For the eighth time the Symposium is organized jointly by JINR and CERN. The Symposium attendees are leading specialists in advanced computing and network technologies, distributed computing, GRID and cloud computing, and nuclear electronics.

All previous forums of this series were highly appreciated by the leading specialists and companies involved.

The organizers of the NEC symposia traditionally pay particular attention to young scientists and specialists. The previous NEC conferences attracted an impressive number of such attendees, reaching 35% of the total number of participants.

In 2011 and 2013, student schools on advanced information technologies were organized within the scope of the symposium, each attended by almost 40 students from different countries. In 2015 this tradition is expected to continue.
 
Chairpersons

Vladimir Korenkov, JINR
Ian Bird, CERN
Participants
  • Aleksey Kuznetsov
  • Aleksey Novoselov
  • Alexander Ayriyan
  • Alexander Bogdanov
  • Alexander Bulatov
  • Alexander Degtyarev
  • Alexander Paramonov
  • Alexandr Baranov
  • Alexandre Karlov
  • Alexei Klimentov
  • Alexey Voinov
  • Anastasia Stremoukhova
  • Andrea Favareto
  • Andreas-Joachim Peters
  • Andrei Tsaregorodtsev
  • Andrey Baginyan
  • Andrey Dolbilov
  • Andrey Nechaevskiy
  • Andrey Terletskiy
  • Andrey Yudin
  • Andrey Zarochentsev
  • Anton Churin
  • Artem Petrosyan
  • Charalampos Kouzinopoulos
  • Danila Oleynik
  • Dario Barberis
  • Dirk Duellmann
  • Dmitrii Monakhov
  • Dmitriy Ponkin
  • Dmitry Astanin
  • Dmitry Egorov
  • Dmitry Golub
  • Dmitry Peshekhonov
  • Dmitry Podgainy
  • Elena Kirpicheva
  • Elena Tikhonenko
  • Elena Zemlyanaya
  • Eugeny Molchanov
  • Evgenia Cheremisina
  • Evgeniy Kuznetsov
  • Evgeny Boger
  • Evgeny Gorbachev
  • Eygene Ryabinkin
  • Fabrizio Furano
  • Dmitry Garanov
  • Gennady Ososkov
  • Georgy Sedykh
  • Ian Bird
  • Ignacio Barrientos Arias
  • Igor Golutvin
  • Igor Pelevanyuk
  • Igor Semenov
  • Igor Semenushkin
  • Igor Tkachenko
  • Ilija Vukotic
  • Ilya Shirikov
  • Irina Filozova
  • Iurii Sakharov
  • Ivan Bednyakov
  • Ivan Filippov
  • Ivan Slepov
  • Ivan Vankov
  • Jan Kundrát
  • Julia Andreeva
  • Konstantin Gertsenberger
  • Ksenia Klygina
  • Lee Sawyer
  • Lidija Zivkovic
  • Livio Mapelli
  • Lubomir Dimitrov
  • Ludmila Kapustina
  • Maksim Bashashin
  • Marat Biktimirov
  • Maria Grigorieva
  • Massimo Lamanna
  • Maxim Karetnikov
  • Mikhail Belov
  • Mikhail Borodin
  • Mikhail Buryakov
  • Mikhail Korotkov
  • Milos Lokajicek
  • Mohammad Al-Turany
  • Nadezhda Tokareva
  • Nataliia Kulabukhova
  • Nedaa Asbah
  • Nichita Degteariov
  • Nikita Balashov
  • Niko Tsutskiridze
  • Nikolay Gorbunov
  • Nikolay Kutovskiy
  • Oksana Streltsova
  • Oksana Tyaglaya
  • Oleg Strekalovsky
  • Olga Gerget
  • Olga Tyatyushkina
  • Olga Ustimenko
  • Oliver Keeble
  • Oxana Smirnova
  • Patrick Fuhrmann
  • Peter Bogatencov
  • Petr Zrelov
  • Roman Semenov
  • Ryan White
  • Sebastian Bukowiec
  • Sergei Andrianov
  • Sergey Manoshin
  • Stanislav Pakulyak
  • Stefan Motycak
  • Stepan Vereschagin
  • Svetlana Murashkevich
  • Tadeusz Kurtyka
  • Tatiana Strizh
  • Tatiana Zaikina
  • Tatsuya Mori
  • Thurein Kyaw Lwin
  • Tigran Mkrtchyan
  • Tushov Evgeny
  • Vadim Bednyakov
  • Valeriy Parubets
  • Valery Mitsyn
  • Vasily Andreev
  • Vasily Velikhov
  • Viacheslav Ilyin
  • Victor Chepigin
  • Victor Gergel
  • Victor Grebenyuk
  • Victor Matveev
  • Victor Rogov
  • Victor Zamriy
  • Victoria Belaga
  • Victoriya Osipova
  • Vitaly Yermolchyk
  • Vladimir Borisov
  • Vladimir Dunin
  • Vladimir Karjavine
  • Vladimir Korenkov
  • Vladimir Palichik
  • Yang Qin
  • Yaroslav Tarasov
  • Yulia Karlova
  • Yuri Pepelyshev
  • Yury Panebrattsev
  • Yury Samoylenko
  • Yury Tsyganov
  • Zurab Modebadze
Programme
    • Welcome speeches
      • 1
        Welcome of the Minister of Education
        Speaker: Predrag Boshkovic
      • 2
        Welcome of JINR
        Speaker: Prof. Victor Matveev (JINR)
      • 3
        Welcome of CERN
        Speaker: Dr Tadeusz Kurtyka (CERN)
      • 4
        Welcome of the Russian Ambassador
        Speaker: Sergei Gritcai
      • 5
        Welcome of the French Ambassador to Montenegro
      • 6
        Welcome of the Swiss Ambassador to Montenegro
      • 7
        Welcome of NEC’2015 Local Organizing Committee
        Speaker: Slobodan Backovic
      • 8
        Welcome of NEC’2015 Local Organizing Committee
        Speaker: Andrey Khrgian
      • 9
        Welcome of sponsors (IBS Platformix, Jet Infosystems, Niagara)
    • 10
      The JINR Scientific Program
      Speaker: Prof. Victor Matveev (JINR)
      Slides
    • 11
      The CERN scientific programme – is there life after Higgs?
      Although the flagship of CERN physics is the Large Hadron Collider (LHC), the CERN scientific programme is varied and diversified. It extends to low-energy nuclear physics, antiproton experiments and fixed-target experiments at intermediate energies. After the Higgs discovery in 2012, an intense activity has started to prepare for the future. While the highest priority remains the LHC, with an important investment foreseen for the upgrade to high luminosity, attention is being paid to diversifying the scientific programme (e.g. neutrino physics) and to the high-energy frontier (linear collider and future circular collider studies). In this time following the Higgs discovery, physicists need to remain more than ever open, looking for the answer to the fascinating question: is there life after Higgs?
      Speaker: Dr Livio Mapelli (CERN)
      Slides
    • 11:00
      Coffee break
    • 12
      Collaboration of CERN with CIS and South-East-European countries
      Speakers: Dr Christoph Schaefer (CERN), Dr Tadeusz Kurtyka (CERN)
      Slides
    • 13
      The evolution of the WLCG Grid
      The Worldwide LHC Computing Grid (WLCG) has been in production for more than 10 years, supporting the preparations for, and then the first run of, the LHC. It has shown itself to be one of the pillars of the infrastructure necessary to enable the rapid production of physics results from the LHC, and has been in constant use at a very high load since its first introduction. However, even from the first months of real data flowing in 2010, the computing models and the WLCG infrastructure itself have been evolving to adapt to the realities of real data and the real use cases of the experiments. In particular, the data management services have responded to the significant capabilities of the global network available to the LHC, far above what was anticipated, and to the requirement to optimise data placement and movement. Concepts such as global data federations and intelligent data placement and caching have been introduced. In recent years, virtualisation and cloud technologies have become more and more important and are now an important piece of the WLCG technology. Since the experiments and WLCG itself receive offers of computing not just from the pledged resources, but also in the form of opportunistic resources in private and public clouds, in HPC machines, and various other sources such as volunteer computing, the foreseen evolution of WLCG must be to make use of this pool of opportunity, and not to restrict itself to “grid” or “cloud”, but to adapt and easily incorporate heterogeneous resources as they are made available. This talk will summarise the experience of Run 1, and how the WLCG is anticipated to evolve during Run 2 and in preparing for the LHC and detector upgrades.
      Speaker: Ian Bird (CERN)
      Slides
    • 12:40
      LUNCH
    • 14
      Large-scale data services for science: present and future challenges
      CERN IT operates the main storage resources for data taking and physics analysis mainly via three systems: AFS, CASTOR and EOS. Managed disk storage amounts to about 100 PB (with relative ratios 1:10:30). EOS deploys disk resources evenly across the two CERN computer centres (Meyrin and Wigner). The physics data archive (CASTOR) contains about 100 PB so far. We also provide sizeable resources for general IT services, most notably OpenStack and NFS clients; this is implemented with a Ceph infrastructure with a total capacity of ~1 PB (which we scaled up for testing by a factor of 10). Recently a new service, CERNBOX, has been added to provide file synchronisation and sharing functionality (more than 2000 users). We will describe the operational experience and plans for the future: data services for LHC data taking (new roles of CASTOR and EOS); experience in deploying EOS across multiple sites; experience in coupling commodity and home-grown solutions (e.g. Ceph disk pools for AFS, CASTOR and NFS); and the future evolution of these systems in the WLCG realm and beyond, especially in the promising field of cloud synchronisation systems.
      Speaker: Dr Massimo Lamanna (CERN)
      Slides
    • 15
      Status and perspectives of Laboratory of Information Technology at JINR
      The report introduces the status and evolution of the information technologies at JINR. The objective of the Laboratory of Information Technologies is to provide further development of the JINR network and information infrastructure required by the research and production activity of JINR and its Member States, using the most advanced information technologies. The existing Central Information and Computing Complex of JINR is evolving into the Multifunctional Centre for Data Storage, Processing and Analysis, aimed at providing its users with a wide range of possibilities through its main components: a grid infrastructure at the Tier-1 and Tier-2 levels devoted to the support of the LHC experiments (ATLAS, ALICE, CMS, LHCb), FAIR (CBM, PANDA), NICA (JINR) and other large-scale experiments; a general-purpose computing cluster; a cloud computing infrastructure; the heterogeneous computing cluster HybriLIT; and an education and research infrastructure for distributed and parallel computing. Particular attention is paid to the creation of a unified information environment integrating a number of various technological solutions, concepts and techniques. Such an environment should integrate supercomputer (heterogeneous), grid and cloud complexes and systems in order to provide optimal approaches for solving various types of scientific and applied tasks. Necessary requirements for such an environment are scalability, interoperability and adaptability to new technical solutions. The unified environment is a hardware-software complex which operates 12 months a year in 24×7 mode and uses a wide variety of architectures, platforms, operating systems, network protocols and software products. The interface of the unified environment should provide a way to organize collective development, to solve problems of various complexity and subject matter, to manage and process data of various volumes and structures, and to support training and the organization of scientific and research processes.
      Speaker: Dr Vladimir Korenkov (JINR)
      Slides
    • 16
      Status of the NICA project at JINR
      The scientific program and the current status of the realization of the NICA project are presented in the report. The new scientific project NICA (Nuclotron-based Ion Collider fAcility) is now under preparation at the Joint Institute for Nuclear Research (JINR) in Dubna. The project is aimed at two scientific programs: the study of hot and dense baryonic matter under extreme conditions and the search for phase transitions, and the investigation of nucleon spin. The heavy-ion program will be carried out in the energy range up to √s_NN = 11 GeV/n at an average luminosity of L = 10^27 cm^-2 s^-1 for 197Au79+ nuclei. The energy in polarized beam collisions will reach √s_NN = 27 GeV for protons and √s_NN = 13.2 GeV/n for deuterons at a luminosity of L = 10^32 cm^-2 s^-1. The accelerator facility of the NICA complex is based on the existing superconducting synchrotron, the Nuclotron, and consists of a set of ion sources (KRION-6T, SPP and others), two linacs (HILac and LU-20), a booster synchrotron and two superconducting collider rings. The scientific program will be realized on the Nuclotron extracted beams (BM@N experiment) and in the collider mode (MPD and SPD experiments).
      Speaker: Dr Dmitry Peshekhonov (JINR)
      Slides
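A quick kinematic aside, assuming a symmetric collider of identical ions (an illustration, not a figure from the abstract): the centre-of-mass energy per nucleon pair is twice the total beam energy per nucleon, so the quoted √s_NN = 11 GeV corresponds to a beam kinetic energy of about 4.6 GeV per nucleon.

```latex
\sqrt{s_{NN}} = 2E_N \;\Rightarrow\; E_N = 5.5\ \text{GeV},\qquad
T_N = E_N - m_N c^2 \approx 5.5 - 0.94 \approx 4.6\ \text{GeV per nucleon}
```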
    • 15:40
      Coffee break
    • 17
      Virtualization of computations - new approaches and technologies: from data storage systems to desktops
      Speaker: Mr Alexander Paramonov (Candidate of Technical Sciences, MBA, ACC)
    • 18
      The main approach to Big Data parallel processing: Oracle way
      Speaker: Dr Alexey Struchenko (Jet Infosystems)
      Slides
    • 19
      Supermicro/Niagara Innovation Technologies
      Speaker: Dmitry Garanov (Niagara, Moscow)
      Slides
    • 19:00
      Welcome Party
    • Detector & Nuclear Electronics
      Convener: Prof. Ivan Vankov (Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences)
      • 20
        Radiation Monitoring of the GEM Muon Detectors at CMS
        The higher energy and luminosity of the future High Luminosity (HL) LHC lead to a significant increase of the radiation background around the CMS subdetectors, especially in the high-pseudorapidity region. Under such heavy conditions, the RPCs (used in the muon trigger) most probably could not operate effectively. A possible better solution is the so-called GEM (Gas Electron Multiplier) detector, which will be tested at CMS in the near future. A monitoring system to control the radiation dose absorbed by the GEMs under test has been developed. Two types of sensors are used in it: RadFETs for the total absorbed dose and p-i-n diodes for particle (proton and neutron) detection. The basic detector unit, called RADMON, contains two sensors of each type and can be installed at each GEM detector. The system has a modular structure, making it easy to increase the number of controlled RADMONs: one module controls up to 12 RADMONs, organized in three groups of four, and communicates with the control system using RS485 and/or CANBUS interfaces.
        Speaker: Dr Lubomir Dimitrov (Institute for Nuclear Research and Nuclear Energy)
        Slides
      • 21
        Trigger Module for Spectrometer with DT5742 Digitizers
        High-speed switched-capacitor waveform digitizers are increasingly used in studies of rare events in nuclear physics. Digitizers complement classic analog input systems or completely replace them. To start registration, a trigger signal that identifies an interesting event is required. The discriminator threshold levels are set individually via USB 2.0. The trigger-signal generating logic is programmed in an FPGA chip. The module has 32 input channels. Features of the realization of the trigger module of the «COMETA-F» spectrometer are described in this article.
        Speaker: Dr Oleg Strekalovsky (JINR)
        Slides
      • 22
        Status of the Front-end-Electronics based on the NINO ASIC for the Time-of-Flight measurements in the MPD
        A conceptual design of the MultiPurpose Detector (MPD) is proposed for the study of hot and dense baryonic matter in collisions of heavy ions over the atomic mass range A = 1–197 at a centre-of-mass energy up to √(s_NN) = 11 GeV (for Au79+). The MPD experiment is foreseen to be carried out at a future JINR accelerator complex for heavy ions, the Nuclotron-based Ion Collider fAcility (NICA), which is designed to reach the required parameters with an average luminosity of L = 10^27 cm^-2 s^-1. The ambitious physics goals of the MPD require excellent particle identification capability over as large a phase-space volume as possible. Charged particles in a large momentum range are identified in the MPD by the Time-of-Flight (ToF) detector; the overall time resolution should be better than 100 ps. For the ToF system of the MPD, Multigap Resistive Plate Chambers (MRPC) with a strip readout are used. A very important contribution to the high performance of the ToF system is the readout electronics: for full exploitation of the excellent timing properties of the Multigap Resistive Plate Chamber, front-end electronics (FEE) with special characteristics are needed. The signals from the MRPCs must be amplified and discriminated as fast as possible and without loss. A signal is read from both ends of a strip, which raises compatibility issues for the FEE. The NINO application-specific integrated circuit (ASIC), developed by the CERN LAA project, which combines a fast amplifier, a discriminator and a stretcher, has been chosen as the basis of the front-end electronics. A preamplifier board based on the NINO ASIC was designed in the Laboratory of High Energy Physics (LHEP) for use with the ToF-MPD MRPC. According to the results of bench tests, the preamplifier board showed stable operation and good time resolution (>10 ps). It was also tested with a detector on the Nuclotron beam, where a time resolution of the electronics-detector system of ~55 ps was achieved.
        Speaker: Mr Mikhail Buryakov (JINR, LHEP)
        Slides
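To put the quoted 100 ps requirement into context, a standard time-of-flight identification estimate (with an illustrative flight path of L ≈ 1.5 m and momentum p = 1.5 GeV/c, values not taken from the abstract) gives the arrival-time difference between a pion and a kaon of the same momentum as

```latex
t = \frac{L}{c}\sqrt{1+\frac{m^{2}c^{2}}{p^{2}}},\qquad
\Delta t_{\pi K} = \frac{L}{c}\left(\sqrt{1+\frac{m_K^{2}c^{2}}{p^{2}}}-\sqrt{1+\frac{m_\pi^{2}c^{2}}{p^{2}}}\right)\approx 0.24\ \text{ns},
```

i.e. only a bit over two standard deviations for a 100 ps overall resolution, which is why the resolution requirement is stated so tightly.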
      • 23
        Magnetic measurement system for series production of NICA superconducting magnets. Data acquisition, control and data analysis.
        The Nuclotron-based Ion Collider fAcility (NICA) is the new accelerator complex being constructed at JINR. More than 250 superconducting (SC) magnets will be assembled and tested at the new test facility in the Laboratory of High Energy Physics, JINR. The magnetic measurement system for the NICA booster dipole magnets was built and commissioned in late 2013. The first cryogenic measurements of dipole magnets were done in late 2014. The magnetic measurement system and its data acquisition and control system are described. Data analysis procedures and the first results of the cryogenic magnetic measurements are presented and discussed.
        Speaker: Mr Vladimir Borisov (JINR)
        Slides
    • Workshop "From Local File Catalog to Name space publisher + meta-catalog"
      Slides
    • 11:05
      Coffee break
    • Detector & Nuclear Electronics
      Convener: Dr Igor Semenov (Project Center ITER (Russian Domestic Agency ITER))
      • 24
        Electronic devices for multichannel setups in FLNR.
        Several setups for the synthesis of superheavy elements have been developed in FLNR, including multi-detector spectrometers of nuclear reaction products: VASSILISSA, DGFRS (Dubna Gas-Filled Recoil Separator), MASHA, etc. The number of channels in such spectrometers is growing continuously and now amounts to several hundred. The electronics for such spectrometers should preferably be standardized yet suitable for use in different setups. An important issue is the software and hardware for fast checking and control of the parameters of all spectrometric channels during adjustment for an experiment. Such software, developed in Qt (Windows, MinGW 32-bit compiler), is presented. The article presents a block diagram and a short description of the basic electronic devices used in a spectrometric channel (amplifier, ADC) and of brand-new devices of this series. A new approach to the problem of time triggering of the global clock in the data acquisition system and a real use case in FLNR are presented. A block diagram of a multichannel programmable simulator of analog signals, as part of the software-hardware check of spectrometric channels, is also introduced.
        Speaker: Aleksey Kuznetsov (JINR)
      • 25
        New beam diagnostic system for the MASHA setup
        A new beam diagnostic system, based on the PXI standard, was developed, tested and used in the experiment for the MASHA setup. The beam energy and beam current measurements are realized using a few different methods. Online time-of-flight energy measurement is done using three pick-up detectors; the distance between the first pair of detectors is 2 meters and between the second pair 11 meters. The high-frequency signal generated between the preamplifier and the pick-up sensors was damped by fine preamplifier adjustments. We used two electronics systems to measure the time between pick-ups. The first is based on fast Agilent digitizers (2 channels, 4 GHz sampling rate) and the second on a constant-fraction discriminator connected to a TDC (5 ps resolution). Saving signals and signal processing are possible using the digitizers; in this case we use several mathematical algorithms to determine the peak position and a Fast Fourier Transform for frequency-domain signal processing. The system was calibrated by comparing the signal delay between the first and the second pick-up detector. A new graphical interface for controlling the electronic devices and for online energy calculation was developed in MFC C++. A second system, based on microchannel-plate time-of-flight detectors and a silicon detector, was also used to determine the beam energy and the type of accelerated particles. The time-of-flight measurement has 100 ps time resolution, and the energy difference between the two systems is less than 1.5%. The beam current measurement is realized by two different sensors: the first is a rotating Faraday cup before the target and the second is an emission detector after the target. The detectors are connected to 50 uA range power supplies made at JINR and controlled by LabVIEW software developed by our group. Both systems are synchronized with our data acquisition system, and the information on beam energy and beam current is included in the data event. This system is now used in the experiments on superheavy element synthesis at the U400M cyclotron in the Flerov Laboratory of Nuclear Reactions (FLNR). This work was supported by the Russian Foundation for Basic Research, grant no. 13-02-12089-ofi_m.
        Speaker: Mr Stefan Motycak (JINR)
        Slides
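A minimal sketch of the kind of time-of-flight energy calculation such a system performs, with a purely illustrative baseline and flight time (not numbers from the measurement itself):

```python
# Illustrative time-of-flight beam-energy estimate between two pick-up
# detectors a fixed distance apart; the input values are examples only.
C = 299_792_458.0        # speed of light, m/s
AMU_MEV = 931.494        # atomic mass unit, MeV/c^2

def kinetic_energy_per_nucleon(baseline_m: float, tof_ns: float) -> float:
    """Kinetic energy per nucleon (MeV/u) from the flight time over a baseline."""
    beta = baseline_m / (tof_ns * 1e-9 * C)
    gamma = 1.0 / (1.0 - beta ** 2) ** 0.5
    return (gamma - 1.0) * AMU_MEV

# Example: an 11 m baseline and a 354 ns flight time give beta ~ 0.10,
# i.e. roughly 5 MeV per nucleon.
print(round(kinetic_energy_per_nucleon(11.0, 354.0), 1))
```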
      • 26
        Groundbased complex for checking the optical system of the TUS experiment
        The purpose of the TUS space experiment is to study ultra-high-energy cosmic rays by registering the extensive air showers they generate, using a satellite in space. The concentrator located on the satellite is made in the form of a Fresnel mirror directed toward the Earth's atmosphere, with a photodetector at its focus. The angle of view of the mirror is ±5°, which, for the planned height of the satellite's orbit, corresponds to an area of 80×80 km² on the ground. A ground complex consisting of a number of stations is being constructed in order to check the optical system of the experiment (the number of stations and their locations will be determined taking into account the satellite's actual orbit after it is launched). Each station consists of a light source, an optical system forming a light beam, a GPS receiver and a microcontroller that generates a sequence of light signals in time and controls the LED driver. The work is supported by the RFBR grant 15-02-05498.
        Speaker: Dr Nikolay Gorbunov (JINR)
    • 12:10
      LUNCH
    • Triggering, Data Acquisition, Control Systems
      Convener: Livio Mapelli (CERN)
      • 27
        Status of the Nuclotron and NICA control system development.
        The Nuclotron is a 6 GeV/n superconducting synchrotron operating at JINR, Dubna, since 1993. It will be the core of the future accelerator complex NICA, which is now under construction. The TANGO-based control system of the accelerator complex is under development. The report describes its structure, main features and present status.
        Speaker: Mr Evgeny Gorbachev (JINR)
        Slides
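For readers unfamiliar with TANGO, the sketch below shows what a minimal TANGO device server looks like using the PyTango high-level API; the device class, attribute and command names are invented for illustration and are not part of the Nuclotron/NICA control system.

```python
# Minimal TANGO device server sketch (PyTango high-level API); the names
# below (MagnetPS, current, TurnOn) are illustrative placeholders.
from tango import AttrWriteType, DevState
from tango.server import Device, attribute, command, run

class MagnetPS(Device):
    """A toy power-supply device exposing one read/write attribute."""

    current = attribute(dtype=float, unit="A", access=AttrWriteType.READ_WRITE)

    def init_device(self):
        super().init_device()
        self._current = 0.0
        self.set_state(DevState.OFF)

    def read_current(self):
        return self._current

    def write_current(self, value):
        self._current = float(value)

    @command
    def TurnOn(self):
        self.set_state(DevState.ON)

if __name__ == "__main__":
    run((MagnetPS,))
```

In a real device server the read and write methods would wrap the actual hardware I/O, and clients would reach the attributes and commands through the TANGO naming service.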
      • 28
        Multidetector system for nanosecond tagged neutron technology
        In the T(d,n)He4 reaction, each 14 MeV neutron is accompanied (tagged) by a 3.5 MeV alpha particle emitted in the opposite direction. A position- and time-sensitive alpha detector measures the time and coordinates of the associated alpha particle, which allows the time and direction of the neutron escape to be determined. The spectrum of gamma rays emitted in the interaction of tagged neutrons with nuclei of chemical elements allows the chemical composition of the irradiated object to be identified. Recording alpha-gamma coincidences in a very narrow time window provides the possibility of background suppression by spatial and time discrimination of events. The Nanosecond Tagged Neutron Technology (NTNT) based on this principle has great potential for various applications, e.g. remote detection of explosives, exploration of diamond pipes, etc. For the practical realization of NTNT, the time resolution of the gamma-ray recording with respect to the alpha-particle recording should be close to 1 ns. The total intensity of signals can exceed 1·10^6 s^-1 from all gamma detectors and 1·10^7 s^-1 from the alpha detector. Processing such a data stream without losses and distortion of information is one of the challenging problems of NTNT. At present, two approaches to the DAQ system of NTNT devices are used: (1) preliminary online selection of pulses by hardware and transmission of only useful events to a computer, and (2) complete digitization of the signals from all detectors and transmission of the data stream to the computer for subsequent processing. In this study, we used the first approach, based on hardware selection of useful events according to specified criteria. The main selection criterion is the presence of signals from the alpha and gamma detectors within preset time and amplitude ranges in the absence of overlapping events. Several models of the DAQ system were produced and their characteristics examined. The architecture of the multidetector system is considered. Comparison with “digital” DAQ systems demonstrated that the “analog” DAQ provides better timing parameters and energy consumption at a limited rate of useful events.
        Speaker: Dr Maxim Karetnikov (All-Russia Research Institute of Automatics)
        Slides
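The offline analogue of the hardware coincidence selection described above can be sketched as a simple window match between two sorted timestamp streams; the data and the 1 ns window below are illustrative only, not the actual DAQ code.

```python
# Illustrative offline coincidence search between alpha- and gamma-detector
# timestamps (in ns), mimicking the narrow-window selection in software.
def find_coincidences(alpha_ts, gamma_ts, window_ns=1.0):
    """Return (t_alpha, t_gamma) pairs with |t_gamma - t_alpha| <= window_ns.
    Both input lists must be sorted in ascending order."""
    pairs, j = [], 0
    for ta in alpha_ts:
        # advance the gamma pointer until it can be inside the window of ta
        while j < len(gamma_ts) and gamma_ts[j] < ta - window_ns:
            j += 1
        k = j
        while k < len(gamma_ts) and gamma_ts[k] <= ta + window_ns:
            pairs.append((ta, gamma_ts[k]))
            k += 1
    return pairs

print(find_coincidences([100.0, 250.0], [99.6, 180.0, 250.8], window_ns=1.0))
# -> [(100.0, 99.6), (250.0, 250.8)]
```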
    • 15:10
      Coffee break
    • Triggering, Data Acquisition, Control Systems
      Convener: Dr Nikolay Gorbunov (JINR)
      • 30
        Development of tools for real-time betatron tune measurement at Nuclotron
        The betatron tune is one of the important beam parameters that must be known and controlled to avoid beam instability in a circular particle accelerator. A real-time method for betatron tune measurements at the Nuclotron and the NICA Booster was developed and tested. A band-limited noise source and a chirp (frequency sweep) were used for beam excitation. The transverse beam oscillation signals were sampled either at a constant frequency or at the beam revolution frequency and digitized with high-resolution ADCs. The Fourier transform of the acquired data immediately yields both the X and Z betatron tunes. The report presents the current state of the measurement system, beam test results and future improvements.
        Speaker: Mr Dmitrii Monakhov (JINR)
        Slides
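A minimal sketch of the Fourier-transform step on synthetic turn-by-turn data (the tune value and noise level are invented for the example; this is not the Nuclotron software):

```python
# Extracting a fractional betatron tune from turn-by-turn position data
# with an FFT; the signal below is synthetic and purely illustrative.
import numpy as np

n_turns = 2048
true_tune = 0.21                                   # assumed fractional tune
turns = np.arange(n_turns)
signal = np.cos(2 * np.pi * true_tune * turns) + 0.1 * np.random.randn(n_turns)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(n_turns)))
freqs = np.fft.rfftfreq(n_turns, d=1.0)            # in units of the revolution frequency
tune = freqs[np.argmax(spectrum[1:]) + 1]          # skip the DC bin

print(f"measured fractional tune: {tune:.4f}")     # ~0.21
```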
      • 31
        The thermometry system of superconducting magnets test bench for the NICA accelerator complex
        Precise temperature control in various parts of the magnet and thermostat is one of the vital problems during cryogenic tests. The report describes the design of the thermometry system developed in LHEP JINR. The hardware consists of resistance temperature detectors of TVO and PT100 types, precision current sources and multi-channel high-resolution acquisition devices from National Instruments. The software is developed using the Tango control system framework. It consists of a few Tango modules performing ADC data acquisition, digital filtering, temperature calculations and database storage access, plus a standalone web application providing the operator interface. Besides that, the report describes generic software tools developed for the design of web client software for Tango-based control systems.
        Speaker: Mr Georgy Sedykh (JINR)
        Slides
      • 32
        TANGO Standard Software for Nuclotron Beam Slow Extraction Control
        TANGO Controls is the basis of the NICA control system. The report describes the software that integrates the Nuclotron beam slow extraction subsystem into the TANGO system of NICA. The objects of control are the resonance lens power supplies and the extracted-beam spill controller. The software consists of the subsystem device server, a remote client and a web module for viewing the subsystem data. Results of testing the software are presented.
        Speaker: Mr Vasily Andreev (VBLHEP JINR)
        Slides
      • 33
        Low Level Radio Frequency system of NICA linac
        The report describes the features of the development and creation of the main oscillator for the high-frequency linear accelerator of the NICA complex. The report presents the principles of construction of the five-channel precision generator with automatic adjustment of signal frequency and phase, examines the principles of frequency adjustment of the HILac resonators, and reports on the installation and tuning of the equipment, with data from equipment tests at the linear accelerator of the NICA complex.
        Speaker: Mr Ilya Shirikov
        Slides
      • 34
        DAQ software in MPD experiment NICA
        Every experiment has its own software. The current report describes the DAQ software of the BM@N experiment: Run Control, a program that controls (configures, prepares, starts/stops) run execution; the First Level Processor, a layer that controls the data flow from the Detector Readout Electronics (DRE), checking and formatting it; and the Event Building system, which buffers the data flow, sorts sub-events and distributes completed events. The report describes the present state and the plans for the MPD NICA experiment.
        Speaker: Mr Ivan Filippov (JINR)
        Slides
      • 35
        L0 Trigger unit prototype for BM@N setup
        The report focuses on the development of the L0 Trigger Unit for the BM@N setup. The L0 Trigger Unit (T0U) generates a trigger signal based on the beam line and target area detector signals. This module also provides both control and monitoring of the power supplies of the detector front-end electronics. The T0U was successfully tested during the BM@N test run with the Nuclotron beam in February-March 2015.
        Speaker: Mr Victor Rogov (JINR)
        Slides
      • 36
        Data acquisition electronics at BM@N
        The report describes the structure of the data acquisition electronics at BM@N. It consists of three interrelated parts. The first is a short description of the electronic modules, their technical characteristics and functionality, and the detectors with which they were used. The second describes the synchronization method used, in particular the White Rabbit protocol and its implementation in the data acquisition electronics. The third is about the front-end and readout standalone electronic modules for calorimetry: the 64-channel modules adc64amp (preamplifier) and ADC64S2 were used with the ZDC and ECAL at BM@N.
        Speaker: Mr Andrey Terletskiy (JINR)
        Slides
      • 37
        Slow Control system at BM@N experiment
        Big modern physics experiments represent a collaboration of work groups and require a wide variety of electronic equipment. Besides the trigger electronics and the data acquisition system (DAQ), there is hardware that is not time-critical and can be run at a low priority. Slow Control systems are used to set up and monitor such hardware. In a typical experiment they are used to set up and/or monitor components such as high-voltage modules, temperature sensors, pressure gauges, leak detectors, RF generators, PID controllers, etc., often from a large number of hardware vendors. A Slow Control system also has to archive the received data for further analysis and handling by physicists, and to warn personnel about critical situations and contingencies.
        Speaker: Mr Dmitry Egorov (JINR)
        Slides
    • Triggering, Data Acquisition, Control Systems
      Convener: Dr Maxim Karetnikov (VNIIA)
      • 38
        New Analog Electronics for the New Challenges in the SHEs Synthesis
        A new series of experiments aimed at the synthesis, and at the study of the decay properties, of both the most neutron-deficient isotopes of element Fl (Z = 114) and the heaviest isotopes of element 118 is being planned at the DGFRS (FLNR JINR). An appropriate registering system should be implemented to serve the spectrometric data coming from the full-absorption double-sided silicon strip detector (DSSSD). Thus, new analog modules were designed that simplify the existing multi-channel measurement system and improve the real-time “active correlation” method of searching for rare SHE formation events. The main features of the new modules (a 16-channel charge-sensitive preamplifier, a 16-channel analog multiplexer and a 1.25 MSPS 12-bit parallel ADC) are presented.
        Speaker: Mr Alexey Voinov (JINR)
        Slides
      • 39
        DeLiDAQ-2D ─ a new data acquisition system for position-sensitive neutron detectors with delay-line readout
        Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, Dubna, Russia. Software for a data acquisition system for modern one- and two-dimensional position-sensitive detectors with delay-line readout, including a software interface to the new electronic module DeLiDAQ-2D with a USB interface, is presented. After successful tests on a bench and on several spectrometers of the IBR-2 reactor, the new system has been integrated into the software complex SONIX [1]. The DeLiDAQ-2D module [2] contains an 8-channel time-to-digital converter (TDC-GPX) with a time resolution of 80 ps; a field-programmable gate array (FPGA) where the firmware is stored and executed, performing logical operations, selection and filtering of events; 1 GB of histogram memory, which makes it possible to accumulate three-dimensional X-Y-TOF spectra of up to 512×512×1024 32-bit words; and a high-speed interface with a fiber-optic communication line. The real count rate (taking into account data transfer and recording to a PC) is no less than 10^6 events/s. The DeLiDAQ-2D module is implemented in the NIM standard and can operate in two modes: histogram mode (online sorting and accumulation of spectra in the internal memory) and list mode (accumulation of raw data directly on a computer disk). 1. Sonix+, http://sonix.jinr.ru 2. A. Belushkin et al., 2D Position-sensitive detector for thermal neutrons. Proceedings of the XXI International Symposium on Nuclear Electronics & Computing, NEC’2007, JINR E10,11-2007-119, Dubna, 2008, pp. 116-120.
        Speaker: Ms Svetlana Murashkevich (JINR)
        Slides
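A quick consistency check of the quoted histogram memory size, assuming 4 bytes per 32-bit word:

```latex
512 \times 512 \times 1024 \ \text{words} \times 4\ \tfrac{\text{bytes}}{\text{word}}
  = 2^{9+9+10+2}\ \text{bytes} = 2^{30}\ \text{bytes} = 1\ \text{GiB},
```

which matches the 1 GB of histogram memory stated above.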
      • 40
        Data acquisition system for focal plane detector of mass separator MASHA
        One of the significant changes at the mass spectrometer MASHA (Mass Analyzer of Super Heavy Atoms), located at the JINR Flerov Laboratory of Nuclear Reactions, during the last years was the upgrade of the data acquisition system. The main difference from the previous CAMAC DAQ is the use of a new modern platform, National Instruments PXI, with XIA multichannel high-speed digitizers (250 MHz, 12 bit, 16 channels). There are 448 spectrometric channels. Each channel has its own charge-sensitive preamplifier; the channels are grouped by 16 in a load-balanced way. The grouped channels are connected to multiplexer-amplifiers and then to the digitizers. The preamplifiers and multiplexer-amplifiers were designed at FLNR JINR [1]. The data acquisition software is written in C++ and consists of two main parts: the first runs on the PXI controller, collecting and storing data from the digitizers located in the experimental hall, and the second is a viewer on a PC in the MASHA control room for online and offline data analysis. The new DAQ system expands the precision measurement capabilities for alpha decays and spontaneous fission at the focal-plane position-sensitive silicon strip detector, which in turn increases the capabilities of the setup in fields such as the registration of low-yield elements. The work was supported by the Russian Foundation for Basic Research, grant no. 13-02-12089-ofi_m. Keywords: DAQ, data acquisition, mass spectrometry, super heavy elements. 1. A. Kuznetsov, E. Kuznetsov, Electronic devices for constructing a multichannel data acquisition system. Proceedings of the XXII International Symposium NEC’2009, Varna, pp. 173-179.
        Speaker: Aleksey Novoselov (JINR)
        Slides
      • 41
        ESIS KRION-6T beam emittance measurement device
        The work is devoted to the study and development of the ESIS KRION-6T beam emittance measurement device using the sectioned ion collector method. In the course of the work, the possibility of charge measurement using a multichannel ADC with current input was investigated, an MCU-based data acquisition system was designed, and system tests were carried out.
        Speaker: Mr Dmitriy Ponkin (LHEP JINR)
        Slides
      • 42
        Host-based data acquisition system to control pulsed facilities of the accelerator
        The talk discusses the development of host-based systems for carrying out measurements and data acquisition to control a large number of pulse parameters and pulsed facilities of accelerators. We consider possible modes of timing and allocation of measuring operations, as well as storage, processing and data output for groups of channels, or tasks. The period or rate of operations and the amount of data for the tasks are matched to the system characteristics of the channels. Estimates of the waiting time and the data-flow rate demonstrate the performance of the system. The technique has been developed to check groups of pulse parameters and to control the facilities of the LUE-200 electron linear accelerator of the IREN neutron source at JINR (Dubna).
        Speaker: Dr Victor Zamriy (JINR)
        Slides
      • 43
        Automation of the control of channel 8 of the Phasotron at DLNP of JINR
        This article presents the software and hardware parts of the project to automate the control of the focusing lenses of channel 8 of the Phasotron at DLNP of JINR. The article describes the goals, concepts and features of the software, developed with Python and Qt.
        Speaker: Andrey Yudin
        Slides
      • 44
        Creating interactive video broadcast system at VBLHEP
        Speaker: Mr Ivan Slepov (JINR)
        Slides
    • ATLAS DAQ
      Convener: Dr Dmitry Peshekhonov (JINR)
      • 45
        Real-time flavour tagging selection in ATLAS
        In high-energy physics experiments, online selection is crucial to select interesting collisions from the large data volume. ATLAS b-jet triggers are designed to identify heavy-flavour content in real-time and provide the only option to efficiently record events with fully hadronic final states containing b-jets. In doing so, two different, but related, challenges are faced. The physics goal is to optimise as far as possible the rejection of light jets, while retaining a high efficiency on selecting b-jets and maintaining affordable trigger rates without raising jet energy thresholds. This maps into a challenging computing task, as tracks and their corresponding vertexes must be reconstructed and analysed for each jet above the desired threshold, regardless of the increasingly harsh pile-up conditions. We present an overview of the ATLAS strategy for online b-jet selection for the LHC Run 2, including the use of novel methods and sophisticated algorithms designed to face the above mentioned challenges. A first look at the performance in Run 2 data is shown and compared to the performance during the Run 1 data-taking campaign. The ATLAS FastTracKer (FTK) system does global track reconstruction after each level-1 trigger to enable the high-level trigger to have early access to tracking information. We present the status of the FTK commissioning (expected to be completed in 2016) and discuss how the system can be exploited to improve the current b-jet trigger performance.
        Speaker: Lidija Zivkovic (Institute of Physics Belgrade, Belgrade, Serbia)
        Slides
      • 46
        The ATLAS Jet Trigger Software and Performance for LHC Run 2
        The new centre of mass energy and high luminosity conditions during Run 2 of the Large Hadron Collider impose ever more demanding constraints on the ATLAS online trigger reconstruction and selection system. To cope with these conditions, the hardware-based Level-1 trigger now includes a Topological Processor and the software-based High Level Trigger has been redesigned, merging the two previously separate Level-2 and Event Filter steps. In the new joint software processing level, algorithms run in the same computing nodes, thus sharing resources, minimizing the data transfer from the detector buffers and increasing the algorithm flexibility. The selection of events containing jets is uniquely challenging at a hadron collider where nearly every event contains significant hadronic activity. It is, however, of crucial importance to explore many physics topics in the new kinematic regime. The ATLAS Jet Trigger software was mostly rewritten to adapt to the new High Level Trigger, while taking into account past experience from Run 1. The upgraded system profits from a much greater re-use of the precise but costly offline software base, a more robust configuration infrastructure, and two alternative schemes for reading the whole or part of the calorimeter data in real time. This presentation will describe the upgraded ATLAS Jet Trigger, detailing some of its design choices, and will show the first trigger results from real Run 2 data.
        Speaker: Lee Sawyer (Louisiana Tech University, USA)
      • 47
        The Upgrade of the ATLAS Electron and Photon Triggers towards LHC Run 2 and their Performance
        Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for signal selection in a wide variety of ATLAS physics analyses, to study Standard Model processes and to search for new phenomena. Final states including leptons and photons had, for example, an important role in the discovery and measurement of the Higgs particle. Dedicated triggers are also used to collect data for calibration, efficiency and fake-rate measurements. The ATLAS trigger system is divided into a hardware-based trigger (Level 1) and a software-based high level trigger (HLT), both of which were upgraded during the long shutdown of the LHC in preparation for data taking in 2015. The increasing luminosity and more challenging pile-up conditions, as well as the planned higher centre-of-mass energy, demanded the optimisation of the trigger selections at each level to control the rates and keep efficiencies high. To improve the performance, multivariate analysis techniques are introduced at the HLT. The evolution of the ATLAS electron and photon triggers and their performance will be presented, including new results from the early days of the LHC Run 2 operation.
        Speaker: Ryan White (Universidad Técnica Federico Santa María, Valparaíso, Chile)
        Slides
    • 09:45
      Coffee break
    • ATLAS DAQ
      Convener: Dr Lee Sawyer (Louisiana Tech University)
      • 48
        The design and performance of the ATLAS Inner Detector trigger for Run 2
        The design and performance of the ATLAS Inner Detector (ID) trigger algorithms running online on the high level trigger (HLT) processor farm with the early LHC Run 2 data are discussed. During the 2013-15 LHC shutdown, the HLT farm was redesigned to run in a single HLT stage, rather than the two stages (Level 2 and Event Filter) used in Run 1. This allowed a redesign of the HLT ID tracking algorithms, essential for nearly all physics signatures in ATLAS. The redesign of the ID trigger, required in order to satisfy the challenging demands of the higher-energy LHC Run 2 operation, is described. The detailed performance of the tracking algorithms with the initial Run 2 data is discussed for the different physics signatures. This includes both the physics object reconstruction and the timing performance of the algorithms running on the redesigned single-stage ATLAS HLT farm. Comparisons with the Run 1 strategy are made and demonstrate the superior performance of the strategy adopted for Run 2.
        Speaker: Dr Yang Qin (University of Manchester, UK)
        Slides
      • 49
        A Hardware Fast Tracker for the ATLAS trigger
        The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch-crossing rate of 40 MHz to about 1 kHz, at the design luminosity of 10^34 cm^-2 s^-1. After a successful period of data taking from 2010 to early 2013, the LHC is restarting in 2015 with much higher instantaneous luminosity, and this will increase the load on the High Level Trigger system, the second stage of the selection, based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer is part of the ATLAS trigger upgrade project; it is a hardware processor that will provide, at every Level-1 accept (100 kHz) and within 100 microseconds, full tracking information for tracks with momentum as low as 1 GeV/c. By providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, the Fast TracKer will, for example, help the High Level Trigger system in the precise detection of primary and secondary vertices, to ensure robust selections and improve the trigger performance. The Fast TracKer will exploit hardware technologies with massive parallelism, combining Associative Memory ASICs, FPGAs and high-speed communication links. We present the architecture of the FTK system and the results from integration tests, and discuss the expected physics performance in the harsh environment of high pile-up and high luminosity expected for the upcoming LHC Run 2.
        Speaker: Nedaa Asbah (DESY, Hamburg, Germany)
        Slides
      • 50
        Phase-I Trigger Readout Electronics Upgrade of the ATLAS Liquid-Argon Calorimeters
        The Large Hadron Collider (LHC) is foreseen to be upgraded during the shutdown period of 2018-2019 to deliver about 3 times the instantaneous design luminosity. Since the ATLAS trigger system at that time will not allow an increase of the trigger rate, an improvement of the trigger system is required. The ATLAS LAr Calorimeter read-out will therefore be modified, and digital trigger signals with a higher spatial granularity will be provided to the trigger. The new trigger signals will be arranged in 34000 so-called Super Cells, which achieve a 5-10 times better granularity than the trigger towers currently used and allow an improved background rejection. The Super Cell read-out is composed of custom-developed 12-bit combined SAR ADCs in 130 nm CMOS technology, which will be installed on-detector in a radiation environment and digitize the detector pulses at 40 MHz. The data will be transmitted to the back-end using a custom serializer and optical converter over 5.44 Gb/s optical links. These components are installed on 124 LAr Trigger Driver Boards (LTDB), each handling up to 320 Super Cell channels. The back-end system will receive the digitized data at a total rate of 25 Tb/s. LAr Digital Processing Boards (LDPBs) equipped with four Arria-10 FPGAs are foreseen to perform digital signal processing in real time for precise energy reconstruction, pile-up suppression and identification of the correct bunch-crossing time. Each of the 32 LDPBs handles about 1100 Super Cells on average. In order to test the full functionality of the future LAr trigger system, a demonstrator set-up has been installed on the ATLAS detector and is operated in parallel to the regular ATLAS data taking during the LHC Run 2. One Front-End Crate (FEC), covering a region of Δη×Δφ = 1.4×0.4 of one LAr half-barrel, is equipped with two prototype versions of the LTDB using commercial TI ADS5272 ADCs, and the data are received by two prototype LDPB boards implementing Stratix V FPGAs. The LDPBs are operated in a commercial Advanced Telecommunications Computing Architecture (ATCA) shelf system. The talk will give an overview of the Phase-I Upgrade of the ATLAS LAr Calorimeter readout and of the custom-developed hardware, including their role in real-time data processing and fast data transfer. Performance results from the prototype boards in the demonstrator system will be reported, with first measurements of noise levels and system linearity.
        Speaker: Mr Tatsuya Mori (The University of Tokyo)
        Slides
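A back-of-envelope check under the sampling parameters stated above (an estimate, not a figure from the talk): the raw ADC payload alone is

```latex
34\,000\ \text{cells} \times 12\ \text{bit} \times 40\times 10^{6}\ \text{s}^{-1}
 \approx 1.6\times 10^{13}\ \text{bit/s} \approx 16\ \text{Tb/s},
```

so the stated 25 Tb/s presumably also accounts for link encoding and protocol overheads on the 5.44 Gb/s optical links.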
    • 10:50
      Coffee break
    • Non-relational databases and heterogeneous repositories
      Convener: Andreas Peters (CERN)
      • 51
        Evolution of the use of relational and NoSQL databases in the ATLAS experiment.
        The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, run and event metadata. The rapid development of “NoSQL” databases (structured storage services) in the last five years allowed an extended and complementary usage of traditional relational databases and new structured storage tools in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by the modern storage systems. The trend is towards using the best tool for each kind of data, separating for example the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to be orchestrated by specialised services that run on front-end machines and shield the user from the complexity of data storage infrastructure. This talk will describe this technology evolution in the ATLAS database infrastructure and present a few examples of large database applications that benefit from it.
        Speaker: Prof. Dario Barberis (University and INFN Genova (Italy))
        Slides
      • 52
        The unified database for the fixed target experiment BM@N
        Today the use of databases is a prerequisite for high-quality management of, and unified access to, the data of modern high-energy physics experiments. The database described in this report is designed as a comprehensive data store for the ongoing runs of the fixed-target experiment BM@N at the Joint Institute for Nuclear Research. The structure and purposes of the BM@N facility will be briefly presented, and the BMNRoot software of the experiment will be noted. The scheme of the developed database and its parameters will be described in detail in the presentation. The unified database, implemented on the MySQL DBMS, provides user access to the actual information of the experiment: run parameters, the BM@N detector geometry (which may change during a run), the experimental data obtained, etc. It avoids multiple duplication and the use of outdated data in different subdivisions of the Joint Institute for Nuclear Research. The implemented automatic backup of the unified database ensures that the stored data of the experiment will not be lost due to software or hardware failure. The interfaces for access to the developed database will also be described: one is implemented as a set of specialized C++ classes in the BMNRoot software for accessing the data without SQL statements, the other is a web interface available on the web page of the BM@N experiment. The report concludes with the possibility of using the developed unified database in other high-energy physics experiments.
        Speaker: Mr Konstantin Gertsenberger (JINR)
        Slides
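As an illustration of direct access to such a MySQL-backed experiment database, a hypothetical query from Python is sketched below; the host, credentials, table and column names are invented and do not reflect the actual BM@N schema or its C++/web interfaces.

```python
# Hypothetical query of run parameters from a MySQL experiment database;
# connection details, table and column names are placeholders only.
import mysql.connector

conn = mysql.connector.connect(
    host="db.example.org", user="reader", password="secret", database="bmn_db"
)
cur = conn.cursor(dictionary=True)
cur.execute(
    "SELECT run_number, start_time, beam, energy_gev "
    "FROM run_parameters WHERE run_number = %s",
    (42,),
)
for row in cur:
    print(row)
cur.close()
conn.close()
```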
      • 53
        Parallel Database support for Distributed Computing
        The problem of big data is becoming increasingly important in our time; parallel database technology is the most effective solution for its storage and processing. Lately there has been interest in the same approach for data supply in large-scale computations. It is based on the use of parallel database management systems (DBMS), providing parallel processing of the requests of distributed computing systems. There are several commercial solutions which allow large amounts of data to be processed, but until recently they have focused on one or another specific hardware platform (DB2 Parallel Edition, NonStop SQL, NCR Teradata, Oracle RAC, Greenplum, and others) and are not suitable for mass use in distributed systems. In addition, these solutions are expensive commercial products. This work is dedicated to the development of a parallel database computing system for data supply in large-scale computations, which should outperform its commercial counterparts.
        Speaker: Mr Thurein Kyaw Lwin
      • 54
        NICA Project Management Information System
        The growth of science projects changes the criteria for their efficiency: project implementation requires not only an increased level of management specialization but also poses the problem of choosing effective planning methods, deadline monitoring and interaction among the participants involved in research projects. This paper is devoted to the choice of a project management information system for the new heavy-ion collider NICA (Nuclotron-based Ion Collider fAcility). We formulate the requirements for the project management information system taking into account the specifics of the Joint Institute for Nuclear Research (JINR, Dubna, Russia) as an international intergovernmental research organization, on the basis of which a flexible and effective information system for NICA project management is being developed.
        Speaker: Maksim Bashashin (JINR)
        Slides
      • 55
        Concept of JINR Corporate Information System
        The article presents the vision of the JINR Corporate Information System (JINR CIS): an analysis of the current situation, goals and objectives, business requirements, functional requirements, the system structure, assumptions and dependencies, and other factors. Special attention is given to the information support of scientific research - Current Research Information Systems as part of the corporate information system. The objectives of such a system are focused on ensuring the effective implementation of scientific research by using modern information technology, computer technology and automation, and on the creation, development and integration of digital resources on a common conceptual framework. The main task of the corporate information system is comprehensive support of the organization's business processes. The main areas of information support are: 1. Information support of scientific activities. 2. Information support of management and organizational activities of subdivisions. 3. Information support of maintenance and management tasks. 4. Interaction with external information systems. The JINR corporate information system is a system aimed at supporting an integrated information space of distributed digital resources, together with a set of software and hardware to ensure their effective use and full-featured management. The JINR CIS must provide the widest possible coverage of all sides of the management of the Institute, integrating existing information systems in stages and ensuring the performance of the necessary functions. The project assumes continuous system development and the introduction of new information technologies to keep the system technologically relevant.
        Speaker: Irina Filozova (JINR)
        Slides
    • 12:40
      LUNCH
    • Excursion
    • Distributed Computing. GRID & Cloud computing
      Convener: Dr Mohammad Al-Turany (GSI/CERN)
      • 56
        CERN LHC run 2 on OpenStack
        The continuous growth of luminosity in high-energy physics with the LHC restart in 2015 results in a larger amount of data to be analysed and a corresponding increase in required computing power. Given these challenges, we have adopted a number of open-source projects used by other large-scale deployments elsewhere and contributed to those communities. In particular, OpenStack was chosen as the solution to cope with the need to manage larger computing resources with the same manpower and multiple use cases, and to be able to easily scale up the system. OpenStack is nowadays the fastest-growing open cloud solution, building software that powers public and private clouds for a wide range of organizations, including Walmart, Comcast, Cisco, eBay, HP, Intel and Rackspace. The CERN OpenStack cloud has been in production since July 2013, growing to over 5,000 hypervisors to meet the needs of LHC Run 2. In this presentation, an overview of the OpenStack implementation at CERN will be given, with particular emphasis on flexibility, use cases and challenges.
        Speaker: Mr SEBASTIAN BUKOWIEC (CERN)
        Slides
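        The abstract above describes managing large compute resources through OpenStack. A minimal sketch of the kind of automation this enables, using the openstacksdk Python client, is shown below; the cloud name, image, flavor and network names are placeholders and do not reflect CERN's actual configuration.

        ```python
        # Minimal openstacksdk sketch: list hypervisors and boot a test VM.
        # Cloud name, image, flavor and network below are illustrative placeholders.
        import openstack

        conn = openstack.connect(cloud="my-openstack")  # credentials read from clouds.yaml

        # Inventory overview: hypervisors currently registered in the compute service.
        for hv in conn.compute.hypervisors():
            print(hv.name, hv.status)

        # Boot a small test instance.
        image = conn.compute.find_image("cc7-base")
        flavor = conn.compute.find_flavor("m1.small")
        network = conn.network.find_network("private")

        server = conn.compute.create_server(
            name="nec2015-demo",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )
        server = conn.compute.wait_for_server(server)
        print("Instance ready:", server.name, server.status)
        ```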
      • 57
        Desktop supercomputer: what can it do?
        Having the computing power of a large system at hand has been a dream of computational scientists for a long time. There have been many interesting proposals in that direction, but there were always bottlenecks that managed to ruin the original idea. We review some of those problems and argue that new technologies can bring solutions to at least the majority of them. The use of cloud technologies transfers those problems to a purely technical level, and we describe solutions to the most important ones – data transfer overheads, the operational environment and load balancing. All solutions are illustrated on a very popular heterogeneous system, CPU+GPGPU. Despite all favorable results, such a system still cannot substitute general purpose systems. We therefore formulate a new approach for algorithms on heterogeneous systems. It consists of two steps – finding proper variables for an optimal distribution of a problem over the computational system and building a virtual cluster for optimally mapping the algorithm onto it. We give three examples of the realization of this approach for popular physical problems.
        Speaker: Prof. Alexander Bogdanov (St.Petersburg State University)
        Slides
      • 58
        Migration of the WLCG monitoring infrastructure to a new technology stack
        Monitoring the WLCG infrastructure requires gathering and analysing a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks, to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, process and serve monitoring data, has limitations in coping with the foreseen extension of the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource types, such as cloud computing) of WLCG monitoring events. The evolution of the WLCG monitoring systems consists of moving to a new architecture and a new technology stack which provides a solution for scalable and close to real-time distributed data processing. The contribution describes the new architecture, the status of the migration and the lessons learned during technology evaluation and the migration process.
        Speaker: Ms Julia Andreeva (CERN)
        Slides
      • 59
        Simulation concept of NICA-MPD-SPD Tier0-Tier1 computing facilities
        A simulation concept for the grid-cloud services of contemporary HENP experiments of the Big Data scale was formulated while developing the simulation system built at LIT JINR, Dubna. This system is intended to improve the efficiency of the design and development of a wide class of grid-cloud structures by using work quality indicators of a real system to design and predict its evolution. For these purposes the simulation program is combined with the real monitoring system of the grid-cloud service through a special database (DB). The DB accomplishes the acquisition and analysis of monitoring data to carry out dynamical corrections of the simulation. Such an approach allows one to construct a general model pattern which does not depend on the concrete simulated object, while parameters describing this object are used as input to run the pattern. The simulation of some processes of the NICA-MPD-SPD Tier0-Tier1 distributed computing is considered as an example of applying our approach.
        Speaker: Prof. Gennady Ososkov (JINR)
        Slides
    • 10:50
      Coffee break
    • Distributed Computing. GRID & Cloud computing
      Convener: Dr Patrick Fuhrmann (DESY)
      • 60
        Study of ATLAS TRT performance with GRID and supercomputers.
        After the early success in discovering a new particle consistent with the long-awaited Higgs boson, the Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of detector performance at high-occupancy conditions is important for many ongoing physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. The TRT is a large straw-tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). The TRT contributes significantly to the resolution for high-pT tracks in the ID, providing excellent particle identification capabilities and electron-pion separation. The ATLAS experiment uses the Worldwide LHC Computing Grid. WLCG is a global collaboration of computer centers and provides seamless access to computing resources, which include data storage capacity, processing power, sensors, visualization tools and more. WLCG resources are fully utilized, and it is important to integrate opportunistic computing resources such as supercomputers and commercial and academic clouds so as not to curtail the range and precision of physics studies. One of the most important studies intended to be run on a supercomputer is the reconstruction of proton-proton events with a large number of interactions in the Transition Radiation Tracker. These studies are made for the ATLAS TRT SW group. It has become clear that high-performance computing contributions are important and valuable. An example of a very successful approach is the Kurchatov Institute's Data Processing Center, which includes a Tier-1 grid site and a supercomputer. TRT jobs have been submitted using the same PanDA portal, transparently for the physicists, and the results have been transferred to the ATLAS Grid site. The presented talk includes TRT performance results obtained with the use of the ATLAS GRID and the "Kurchatov" supercomputer, as well as an analysis of CPU efficiency during these studies.
        Speakers: Dr Alexei Klimentov (Brookhaven National Lab), Mr Dimitrii Krasnopevtsev (National Research Nuclear University MEPhI (RU))
        Slides
      • 61
        EOS - evaluating object drives and non-volatile memory
        The EOS project at CERN provides large scale storage systems to the LHC experiments and many other projects at CERN and beyond. In order to further increase the scalability and availability of the system, we are investigating several new technologies such as Ethernet-connected disk drives and non-volatile memory implementations to further decrease the cost of ownership and the downtime after service problems. In the presentation we will describe the possible benefits, our plans for a joint evaluation with the technology vendors, and summarise the first results achieved.
        Speaker: Andreas-Joachim Peters (CERN)
        Slides
      • 62
        Status of the DIRAC Project: overview and recent developments
        Multiple research user communities need to pool their computing resources into common infrastructures in order to boost the efficiency of their usage. Various grid infrastructures are trying to help new users start doing computations by providing services that facilitate access to distributed computing resources. The DIRAC project provides software for creating and operating such services. Multiple DIRAC installations are functional now in various countries. The project is rapidly evolving by providing access to new types of computing and storage resources. It uses new technologies to ensure better, more scalable and more reactive control of the system. New services for massive computations and data operations are available. In this contribution an overview of the project as well as recent developments will be presented (a minimal job-submission sketch follows this entry).
        Speaker: Dr Andrei Tsaregorodtsev (CPPM-IN2P3-CNRS)
        Slides
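        As a rough illustration of how users interact with a DIRAC service, the sketch below follows the publicly documented Job/Dirac Python API to submit a trivial job; the executable, job name and CPU-time value are placeholders, and site-specific configuration is assumed to be already in place.

        ```python
        # Hedged sketch of a minimal DIRAC job submission via the documented Python API.
        # The executable and parameters are placeholders, not a real workload.
        from DIRAC.Core.Base import Script
        Script.parseCommandLine(ignoreErrors=True)  # initialise the DIRAC environment

        from DIRAC.Interfaces.API.Dirac import Dirac
        from DIRAC.Interfaces.API.Job import Job

        job = Job()
        job.setName("nec2015-hello")
        job.setExecutable("/bin/echo", arguments="Hello from DIRAC")
        job.setCPUTime(300)  # requested CPU time in seconds

        dirac = Dirac()
        result = dirac.submitJob(job)
        if result["OK"]:
            print("Submitted job with ID", result["Value"])
        else:
            print("Submission failed:", result["Message"])
        ```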
    • 12:30
      LUNCH
    • Computations with Hybrid Systems (CPU, GPU, coprocessors)
      Convener: Prof. Alexander Degtyarev (Professor)
      • 63
        HybriLIT : status report
        The paper reviews the present status and the development perspectives of the heterogeneous computing cluster HybriLIT (http://hybrilit.jinr.ru/), which was put into operation in 2014 at the Laboratory of Information Technologies of JINR. HybriLIT provides the possibility to carry out high performance computing within the Multifunctional Information and Computing Complex in LIT JINR. The current configuration of the cluster includes computational nodes with different types of coprocessors (NVIDIA graphics accelerators (GPU) and Intel Xeon Phi coprocessors) with the corresponding software installed. It allows carrying out computations with different parallel programming technologies: CUDA for computations on nodes with GPUs; MPI and OpenMP for computations on the multi-/many-core component of the cluster; OpenMP extensions for computations on the nodes with Intel Xeon Phi coprocessors. Heterogeneous computations may be done with combined technologies (MPI+CUDA, MPI+OpenMP+CUDA, etc.) and with the OpenCL technology; an illustrative MPI sketch follows this entry. To make effective use of the new computing architectures, a software and information environment has been developed. It includes services that allow the users to carry out parallel computations, to develop their own applications, to get up-to-date support and to participate in tutorials on parallel programming technologies.
        Speaker: Dr Petr Zrelov (LIT JINR)
        Slides
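        To illustrate the MPI-level parallelism mentioned above, here is a minimal mpi4py sketch that splits a numerical integration across ranks. It is a generic example, not HybriLIT-specific code; the integrand and interval are chosen only so the result is easy to check.

        ```python
        # Minimal mpi4py sketch of MPI-level parallelism: each rank integrates a
        # sub-interval of sin(x) on [0, pi], and rank 0 sums the partial results.
        # Run with e.g.: mpirun -np 4 python integrate_mpi.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        a, b, n = 0.0, np.pi, 1000000
        chunk = n // size
        lo = a + rank * chunk * (b - a) / n
        hi = a + (rank + 1) * chunk * (b - a) / n if rank < size - 1 else b

        x = np.linspace(lo, hi, chunk + 1)
        local = np.trapz(np.sin(x), x)              # local partial integral

        total = comm.reduce(local, op=MPI.SUM, root=0)
        if rank == 0:
            print("Integral of sin(x) on [0, pi] =", total)  # expected ~2.0
        ```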
      • 64
        Virtual Accelerator Laboratory: the symbolic presentation for space charge fields
        In this work, by Virtual Accelerator we mean a set of services and tools enabling transparent execution of computational software for modeling beam dynamics in accelerators using distributed computing resources. The main use of the Virtual Accelerator is the simulation of beam dynamics by different packages, with the opportunity to match their results and the possibility to create pipelines of tasks, where the results of one processing step, based on a particular software package, can be sent to the input of another processing step. In the case of charged particle beams, the Virtual Accelerator works as a prediction mechanism: which analytical model should be used to exclude, at least partly, a negative effect in the beam dynamics. With the help of external fields these changes can be made. To simulate a large number of particles we need distributed resources for our computations. In this paper different parallel techniques to simulate space charge effects are presented. In particular, the overall performance of the predictor-corrector method is investigated (a minimal predictor-corrector sketch follows this entry).
        Speaker: Ms Nataliia Kulabukhova (Saint Petersburg State University)
        Slides
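        For readers unfamiliar with the predictor-corrector scheme mentioned above, the toy sketch below shows one Heun-type predictor-corrector step for a particle in a simple linear-focusing field. The field model is purely illustrative and is not the space-charge model used in the Virtual Accelerator work.

        ```python
        # Illustrative predictor-corrector (Heun) integrator for a particle state (x, v)
        # under a toy linear-focusing force F = -k*x; not the authors' space-charge model.
        import numpy as np

        def derivative(state):
            """Time derivative of the state vector (x, v) for the toy field."""
            x, v = state
            k = 1.0
            return np.array([v, -k * x])

        def heun_step(state, dt):
            """One predictor-corrector step: Euler predictor, trapezoidal corrector."""
            predictor = state + dt * derivative(state)                          # predict
            return state + 0.5 * dt * (derivative(state) + derivative(predictor))  # correct

        state = np.array([1.0, 0.0])   # initial (x, v)
        dt, steps = 0.01, 1000
        for _ in range(steps):
            state = heun_step(state, dt)
        print("x, v after %d steps:" % steps, state)
        ```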
    • Distributed Computing. GRID & Cloud computing
      Convener: Milos Lokajicek (Institute of Physics AS CR)
      • 65
        Grids and Clouds in the Czech Republic
        Speaker: Mr Jan Kundrát (Institute of Physics of the AS CR and CESNET)
        Slides
      • 66
        Scientific Computing Infrastructure and Services in Moldova
        In recent years, distributed information processing and high-performance computing technologies (HPC, distributed Cloud and Grid computing infrastructures) for solving complex tasks with high demands on computing resources have been actively developing. In Moldova, work on the creation of high-performance and distributed computing infrastructures started relatively recently, through participation in the implementation of a number of international projects. Research teams from Moldova participated in regional and pan-European projects, including initiatives focused on integration into pan-European e-Infrastructures. That allowed them to begin forming a national heterogeneous computing infrastructure, to get access to regional and European computing resources, and to expand the range and areas of tasks being solved.
        Speaker: Mr Nichita Degteariov (RENAM)
      • 67
        Usage of cloud platform for the BY-NCPHEP Tier3 site
        The status of the NC PHEP BSU Tier 3 site is presented. The transition to rack-mounted servers has started. Due to the need for a more scalable and reliable platform providing efficient resource utilization, the tier infrastructure was ported to a cloud with distributed storage. The choice and setup of the cloud are discussed.
        Speaker: Mr Vitaly Yermolchyk (NC PHEP BSU)
        Slides
    • 14:45
      Coffee break
    • Distributed Computing. GRID & Cloud computing
      Convener: Viacheslav Ilyin (NRC Kurchatov Institute)
      • 68
        Dynamic federation of grid and cloud storage
        The Dynamic Federations project ("dynafed") enables the deployment of scalable, distributed storage systems composed of independent storage endpoints. While the Uniform Generic Redirector at the heart of the project is protocol agnostic, we have focussed our effort on HTTP-based protocols, including S3 and WebDAV. The system has been deployed on testbeds covering the majority of the ATLAS and LHCb data, and supports geography-aware replica selection. The work done exploits the federation potential of HTTP to build systems that offer uniform, scalable, catalogue-less access to the storage and metadata ensemble, and the possibility of seamless integration of other compatible resources such as those from cloud providers. Dynafed can exploit the potential of the S3 delegation scheme, effectively federating on the fly any number of S3 buckets from different providers and applying a uniform authorization to them. This feature has been used to deploy in production the BOINC Data Bridge, which uses the Uniform Generic Redirector with S3 buckets to harmonize the BOINC authorization scheme with the Grid/X509 one. We believe that the features of a loosely coupled federation of open-protocol-based storage elements open many possibilities for smoothly evolving the current computing models and for supporting new scientific computing projects that rely on massive distribution of data and that would appreciate systems that can more easily be interfaced with commercial providers and can work natively with Web browsers and clients (a minimal WebDAV listing sketch follows this entry).
        Speaker: Furano Fabrizio (CERN IT/SDC)
        Slides
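        Since the federation is exposed over standard HTTP/WebDAV, a client can browse it with nothing more than a raw PROPFIND request. The sketch below lists one directory level of a hypothetical federation endpoint; the URL is a placeholder and authentication is omitted.

        ```python
        # Sketch: list one directory level of a (hypothetical) HTTP/WebDAV federation
        # endpoint with a raw PROPFIND request; the endpoint URL is a placeholder.
        import requests
        import xml.etree.ElementTree as ET

        endpoint = "https://federation.example.org/myfed/atlas/"   # placeholder URL

        resp = requests.request("PROPFIND", endpoint, headers={"Depth": "1"}, timeout=30)
        resp.raise_for_status()

        # WebDAV replies are XML "multistatus" documents; print the entry hrefs.
        ns = {"d": "DAV:"}
        tree = ET.fromstring(resp.content)
        for response in tree.findall("d:response", ns):
            print(response.find("d:href", ns).text)
        ```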
      • 69
        Cloud infrastructure at JINR
        To fulfill JINR commitments in various national and international projects related to modern information technologies such as cloud and grid computing, as well as to provide the same tools to JINR users for their scientific research, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. OpenNebula was chosen as the cloud platform. Initially it was set up in a simple configuration with a single front-end host and a few cloud nodes. Some custom development was performed to tune the JINR cloud installation to local needs: resource requests via a web form in the cloud web interface, cloud utilization statistics, user authentication via Kerberos, and a custom driver for OpenVZ containers. Because of the high demand for the cloud service and the over-utilization of its resources, it has been re-designed to cover the increasing users' needs in capacity, availability and reliability. Recently a new, separate cloud instance has been deployed in a high-availability configuration with a distributed network file system. As soon as the testing and benchmarking phases are successfully passed, all users' virtual machines will be migrated from the old cloud instance to the new one. It is also planned to add more cloud nodes soon.
        Speaker: Dr Nikolay Kutovskiy (JINR)
        Slides
      • 70
        Creating cloud storage system at JINR
        This article addresses the construction of a distributed storage system and the options for using it. Ceph (FS), GlusterFS, MooseFS and LizardFS were studied as means of creating such a storage system. As a result of the analysis, the system that is currently used as the storage of the JINR cloud service was chosen. The article also describes the options for accessing the cloud storage and how to implement them.
        Speaker: Mr Roman Semenov (JINR)
        Slides
      • 71
        Optimization of over-provisioned clouds
        The variability of workloads experienced by modern user applications leads to an uneven distribution of workloads across physical resources and ineffective hardware utilization in cloud data centers. Some ways to solve this problem are reviewed, and the need to develop algorithms that optimize the hardware utilization of clouds is shown. As an example of one of the promising approaches, a smart algorithm for dynamic re-allocation and consolidation of virtual resources to improve hardware utilization in heterogeneous cloud environments is proposed (a simplified consolidation sketch follows this entry).
        Speaker: Mr Nikita Balashov (JINR)
        Slides
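        To make the idea of consolidation concrete, here is a deliberately simplified first-fit-decreasing re-packing of VMs onto hosts. It is only an illustration of the kind of re-allocation involved, not the algorithm proposed in the talk; the VM names, demands and host capacity are invented.

        ```python
        # Simplified consolidation sketch: first-fit decreasing packing of VMs by CPU
        # demand onto identical hosts. Illustrative only; not the proposed algorithm.

        def consolidate(vms, host_capacity):
            """vms: dict name -> cpu demand; returns a list of hosts (dicts of placed VMs)."""
            hosts = []
            for name, demand in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
                for host in hosts:
                    if sum(host.values()) + demand <= host_capacity:
                        host[name] = demand
                        break
                else:                       # no existing host fits: open a new one
                    hosts.append({name: demand})
            return hosts

        vms = {"web1": 1.5, "db1": 3.0, "batch1": 2.0, "batch2": 2.5, "cache1": 0.5}
        for i, host in enumerate(consolidate(vms, host_capacity=4.0)):
            print("host%d:" % i, host, "load =", sum(host.values()))
        ```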
      • 72
        BES-III distributed computing
        The BES-III experiment at the Institute of High Energy Physics (Beijing, China) is aimed at precision measurements in e+e- annihilation in the energy range from 2.0 to 4.6 GeV. The world's largest samples of J/psi and psi' events and unique samples of XYZ data have already been collected. The expected increase of the data volume in the coming years required a significant evolution of the computing model, namely a shift from centralized data processing to a distributed one. This report summarizes the design of the BES-III distributed computing system, the experience gained after 2 years of deployment and future plans for development.
        Speaker: Mr Igor Pelevanyuk (JINR)
        Slides
      • 73
        Parallel computing with BEAN - BES-III Analysis Framework
        BEAN is a lightweight ROOT-based analysis-only framework designed for the BES-III experiment. A number of approaches to parallel computing are used in BEAN: batch systems, ROOT PROOF and Apache Hadoop. The latter is particularly interesting for particle physics applications, being a new de facto standard in parallel computing. We present here the implementation details of PROOF and Hadoop support in BEAN, the relevant user experience, as well as a performance comparison.
        Speaker: Mr Evgeny Boger (JINR)
        Slides
      • 74
        Professional simulations of neutron spectrometers and experiments by VITESS software package
        Nowadays practically every new neutron spectrometer is simulated before construction or modernization, and its parameters are optimized using calculations on fast modern computers. In several leading world neutron centers, the development of new and the support of existing Monte Carlo program packages (MCSTAS, VITESS, RESTRAX, NISP) is under way. In FLNP, modules for the simulation of neutron spectrometers and virtual experiments for the VITESS program (Virtual Instrument Tool for European Spallation Source) are developed, tested and used. Over the last 10 years, nearly half of the VITESS code (and the corresponding modules) has been successfully developed in close cooperation with the Juelich Centre for Neutron Science (FZ-Juelich, Germany) in Munich. In particular, the tasks of modeling neutron instruments with polarized neutrons are today almost completely covered. The simulation of various flippers and spin-echo spectrometers with constant and time-dependent magnetic fields has been carried out successfully. The magnetic fields can be either model fields (incorporated in the modules) and/or fields calculated by external special software (for example MagNet, Ansys, etc.).
        Speaker: Dr Sergey Manoshin (FLNP JINR)
        Slides
      • 75
        Performing Track Reconstruction at the ALICE TPC using a Fast Hough Transform method
        The Hough Transform algorithm is a popular image analysis method that is widely used to perform global pattern recognition in images through the identification of local patterns in a suitably chosen parameter space. The algorithm can also be used to perform track reconstruction, i.e. to estimate the trajectories of individual particles passing through the sensitive elements of a detector volume. This paper presents a fast reconstruction method for the Time Projection Chamber (TPC) of the ALICE experiment at the LHC. The method, which combines a linear Hough Transform algorithm with a fast filling of the Hough Transform parameter space, is developed within ALICE O^2, the future computing framework of ALICE for Run 3 (a minimal Hough transform sketch follows this entry).
        Speaker: Dr Charalampos Kouzinopoulos (CERN)
        Slides
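        The sketch below shows the general linear Hough transform technique on synthetic 2D hits: each hit votes for all lines through it in (theta, rho) space, and the accumulator maximum identifies the dominant line. It illustrates the method only and has nothing to do with the ALICE O^2 implementation.

        ```python
        # Minimal linear Hough transform: each hit (x, y) votes for lines
        # rho = x*cos(theta) + y*sin(theta). Illustrative only.
        import numpy as np

        def hough_lines(hits, n_theta=180, n_rho=200):
            thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
            rho_max = np.max(np.hypot(hits[:, 0], hits[:, 1]))
            rhos = np.linspace(-rho_max, rho_max, n_rho)
            acc = np.zeros((n_theta, n_rho), dtype=np.int32)
            for x, y in hits:
                rho = x * np.cos(thetas) + y * np.sin(thetas)   # one rho per theta
                idx = np.clip(np.digitize(rho, rhos) - 1, 0, n_rho - 1)
                acc[np.arange(n_theta), idx] += 1               # accumulate votes
            return acc, thetas, rhos

        # Synthetic hits lying roughly on the line y = 2x + 1.
        rng = np.random.default_rng(0)
        x = np.linspace(0, 10, 50)
        hits = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.05, x.size)])

        acc, thetas, rhos = hough_lines(hits)
        i, j = np.unravel_index(np.argmax(acc), acc.shape)
        print("best line: theta=%.3f rad, rho=%.2f, votes=%d" % (thetas[i], rhos[j], acc[i, j]))
        ```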
      • 76
        Design of Web platform for science and engineering in the model of open market
        Speaker: Dr Alexander Kryukov (SINP MSU)
        Slides
    • Computations with Hybrid Systems (CPU, GPU, coprocessors)
      Convener: Prof. Gennady Ososkov (Joint Institute for Nuclear Research)
      • 77
        APPLICATION OF CLUSTER ANALYSIS AND AUTOREGRESSIVE NEURAL NETWORKS FOR THE NOISE DIAGNOSTICS OF THE IBR-2M REACTOR
        Pattern recognition methodologies and artificial neural networks are widely used for reactor noise diagnostics. This is very important for the pulsed reactor of periodic operation IBR-2M (Dubna, Russia), which has a high sensitivity to reactivity fluctuations (40 times higher than stationary reactors with uranium fuel). Cluster analysis allows a detailed study of the structure of fast reactivity effects of IBR-2M. A nonlinear autoregressive neural network with local feedback connections allows predicting slow reactivity effects. It is shown that the power noise is divided into four stable clusters, three of which describe the noise transition region (three days). The fourth cluster constitutes a stable structure lasting until the end of the reactor cycle (two weeks). The noise transition region is formed by the asymptotically increasing vibration of the moving reflectors in the process of their heating after the maximum power is reached. The study of slow processes shows that the nonlinear autoregressive neural network allows predicting, with an error of ~5%, changes in reactivity caused by fluctuations of the liquid sodium flow rate for up to two days of reactor operation.
        Speaker: Dr Yuri Pepelyshev (JINR)
        Slides
      • 78
        System of HPC content archiving
        This work is aimed at developing a system that effectively solves the problem of storing and analyzing files containing text data, using modern software development tools, techniques and approaches. The main challenge of storing a large number of text documents, defined at the problem formulation stage, has to be addressed with functionality such as full-text search and clustering of documents according to their content (a minimal clustering sketch follows this entry). The main system features can be described in terms of a distributed multilevel architecture and the flexibility and interchangeability of components, achieved through the encapsulation of standard functionality in independent executable modules.
        Speaker: Mr Andrei Ivashchenko (St.Petersburg State University)
        Slides
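        As an illustration of content-based document clustering, the sketch below groups a few short texts with TF-IDF features and k-means using scikit-learn. The sample documents are invented, and the real archiving system described in the talk may rely on entirely different components.

        ```python
        # Minimal document-clustering sketch: TF-IDF features + k-means.
        # scikit-learn is used only for illustration of the technique.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        documents = [
            "MPI job failed on node n042 with segmentation fault",
            "segmentation fault in MPI rank 3, core dumped",
            "scheduled maintenance of the cooling system next week",
            "cooling system maintenance window announced for the cluster",
        ]

        vectorizer = TfidfVectorizer(stop_words="english")
        features = vectorizer.fit_transform(documents)         # sparse TF-IDF matrix

        kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
        for doc, label in zip(documents, kmeans.labels_):
            print(label, "-", doc)
        ```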
      • 79
        Impact of Configuration Management system of computer center on support of scientific projects throughout their lifecycle
        In this article the problem of supporting scientific projects in a computer center is considered throughout their lifecycle and in every aspect of support. The Configuration Management system plays a connecting role in the processes related to the provision and support of computer center services. In view of the strong integration of IT infrastructure components through virtualization, control of the infrastructure becomes even more critical to the support of research projects, which means higher requirements for the Configuration Management system. For every aspect of the support of research projects, the influence of the Configuration Management system on it is reviewed, and the development of the corresponding elements of the system is described. The key point of the article is to present the Configuration Management system of the computer center as the central component of a set of proactive procedures supporting scientific projects and maintaining the infrastructure. This set of activities, tied to the Configuration Management system, aims to prevent accidents and to maintain a stable level of services. Particular attention is paid to specific requirements for the system caused by the specifics of the computer center: the collective use of supercomputing resources and the use of solutions based on virtualization. In addition, the article describes possible future development of the system.
        Speaker: Mr Nikolai Iuzhanin (SPbSU)
        Slides
      • 80
        Resource and task management tools for physics applications
        Efficient distribution of high performance computing resources according to actual application needs, along with comfortable and transparent access to these resources, has been an open question since HPC technologies became widely adopted. Physics applications are one of the application classes that require such functionality. In this paper we discuss issues and approaches to managing resources for large-scale applications from physics and related fields, and describe tools to do this, with special attention to virtualization technologies. We evaluate resource distribution and balancing methods applied to physics software packages, analyze the efficiency of our approach compared to traditional methods of HPC resource management, and highlight the concept of a virtual private supercomputer - a virtual computing environment tailored specifically for a target user with particular target applications.
        Speaker: Mr Ivan Gankevich (Saint Petersburg State University)
        Slides
      • 81
        Social Data Collection and Processing Framework
        Modern information technologies have an impact on research in all possible areas of knowledge, and the humanities are not an exception. Some of them, such as psychology and sociology, can use observations of human behavior and the opinions of individuals and communities as a basis for research. One of the possible ways to acquire such data is from social networking services, which in their present state can serve as a rich source of information about people. However, gathering, storing and especially processing such data is a nontrivial task because of its large and ever-growing volume, its complex and diverse structure, and the suitability of psychological and sociological methods for automatic application. This work outlines a framework for managing large amounts of social data and applying psychological and sociological methods to it. It describes how the framework handles gathering, storing and processing complex, interconnected data using the columnar database HBase as the core storage (a minimal HBase sketch follows this entry). In addition, an example of the framework's operation with performance results is presented and future improvements are discussed.
        Speaker: Mr Dmitry Guschansky (St.Petersburg State University)
        Slides
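        For readers unfamiliar with HBase's column-family data model, the sketch below stores and scans social posts via the happybase client. It assumes an HBase Thrift server and a pre-created 'posts' table with a 'data' column family; all names and row keys are placeholders, not the framework's actual schema.

        ```python
        # Sketch of storing and scanning social posts in HBase via happybase.
        # Host, table and column-family names are placeholders.
        import happybase

        connection = happybase.Connection("hbase-thrift.example.org")
        table = connection.table("posts")

        # Row key: user id + timestamp, a common pattern for time-ordered scans.
        table.put(b"user42_20150928T101500", {
            b"data:text": b"Looking forward to NEC'2015 in Budva",
            b"data:likes": b"17",
        })

        # Scan all rows belonging to one user.
        for key, columns in table.scan(row_prefix=b"user42_"):
            print(key, columns[b"data:text"].decode())

        connection.close()
        ```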
      • 82
        Development of cross-platform communication library in C++, with support for multiple scripting languages: architectural pitfalls
        Speaker: Mr Oleg Iakushkin (Saint Petersburg State University)
        Slides
    • Computing for Large Scale Accelerator Facilities (LHC, FAIR, NICA, etc.) and Big Data
      Convener: Ms Julia Andreeva (CERN)
      • 83
        ALFA: Next generation concurrent framework for ALICE and FAIR experiments
        The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of a common software framework in an experiment-independent way: ALFA (ALICE-FAIR framework). ALFA is designed for high quality parallel data processing and reconstruction on heterogeneous computing systems. It provides a data transport layer and the capability to coordinate multiple data processing components. ALFA is a flexible, elastic system which balances reliability and ease of development with performance by using message-based multi-processing in addition to multi-threading. The message-based approach allows different parts of the software to run on different hardware platforms (a heterogeneous system). Moreover, each process in ALFA assumes limited communication with and reliance on other processes. Such a design adds horizontal scaling (multiple processes) to the vertical scaling provided by multiple threads to meet computing and throughput demands. ALFA does not dictate any application protocols; potentially, any content-based processor or any source can change the application protocol. The framework supports different serialization standards for data exchange between different hardware and software languages. The status of the development and the existing prototypes will be presented in this talk (an illustrative messaging sketch follows this entry).
        Speaker: Dr Mohammad Al-Turany (GSI/CERN)
        Slides
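        To illustrate the message-based multi-processing pattern described above, here is a minimal pyzmq push/pull pipeline between two independent processes. It only shows the general ZeroMQ-style messaging pattern and is not ALFA or FairMQ code; the endpoint address and message layout are invented.

        ```python
        # Minimal illustration of message-based multi-processing: a producer process
        # pushes event messages, an independent consumer process pulls and handles them.
        import zmq
        from multiprocessing import Process

        ADDR = "tcp://127.0.0.1:5557"   # placeholder endpoint

        def producer(n):
            sock = zmq.Context().socket(zmq.PUSH)
            sock.bind(ADDR)
            for i in range(n):
                sock.send_json({"event": i, "payload": [i, i * i]})
            sock.send_json({"event": None})          # end-of-stream marker

        def consumer():
            sock = zmq.Context().socket(zmq.PULL)
            sock.connect(ADDR)
            while True:
                msg = sock.recv_json()
                if msg["event"] is None:
                    break
                print("processed event", msg["event"], "payload", msg["payload"])

        if __name__ == "__main__":
            c = Process(target=consumer); c.start()
            p = Process(target=producer, args=(5,)); p.start()
            p.join(); c.join()
        ```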
      • 84
        Data analytics in the ATLAS Distributed Computing
        The ATLAS Data analytics effort is focused on creating systems which provide the ATLAS ADC with new capabilities for understanding distributed systems and overall operational performance. These capabilities include: warehousing information from multiple systems (the production and distributed analysis system - PanDA, the distributed data management system - Rucio, the file transfer system, various monitoring services, etc.); providing a platform to execute arbitrary data mining and machine learning algorithms over aggregated data; satisfying a variety of use cases for different user roles; and hosting new third-party analytics services on a scalable compute platform. We describe the implemented system, where: the data sources are existing RDBMS (Oracle) and Flume collectors; a Hadoop cluster is used to store the data; native Hadoop and Apache Pig scripts are used for data aggregation; and R is used for in-depth analytics. Part of the data is indexed in ElasticSearch so that both simple investigations and complex dashboards can be made using Kibana (an indexing sketch follows this entry).
        Speaker: Dr Ilija Vukotic (University of Chicago)
        Slides
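        The sketch below indexes a few job records in Elasticsearch and runs a terms aggregation, the kind of query a Kibana dashboard issues. It uses the Python client API of the 7.x series; the index name, fields and values are made up and are not the ADC schema.

        ```python
        # Sketch: index job records and aggregate them per site in Elasticsearch.
        # Index name and document fields are invented for illustration.
        from elasticsearch import Elasticsearch

        es = Elasticsearch(["http://localhost:9200"])

        jobs = [
            {"site": "CERN-PROD", "status": "finished", "cpu_seconds": 3400},
            {"site": "BNL-ATLAS", "status": "failed",   "cpu_seconds": 120},
            {"site": "CERN-PROD", "status": "finished", "cpu_seconds": 2800},
        ]
        for i, job in enumerate(jobs):
            es.index(index="panda-jobs", id=i, body=job)
        es.indices.refresh(index="panda-jobs")

        # Count jobs per site with a terms aggregation.
        result = es.search(index="panda-jobs", body={
            "size": 0,
            "aggs": {"per_site": {"terms": {"field": "site.keyword"}}},
        })
        for bucket in result["aggregations"]["per_site"]["buckets"]:
            print(bucket["key"], bucket["doc_count"])
        ```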
      • 85
        Integration Of PanDA Workload Management System With Supercomputers
        The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited with the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 100,000 cores with a peak performance of 0.3 petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, the LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes (a sketch of this wrapper pattern follows this entry). This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms. We will present our current accomplishments with running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astroparticle physics.
        Speaker: Dr Alexei Klimentov (Brookhaven National Lab)
        Slides
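        The "MPI wrapper" pattern mentioned above can be illustrated with a minimal mpi4py script in which every rank launches one independent single-threaded payload, so a serial workload fills a multi-core node. The payload command below is a placeholder, not the actual ATLAS transform or the project's pilot code.

        ```python
        # Sketch of a light-weight MPI wrapper: one serial payload per MPI rank.
        # Run with e.g.: mpirun -np 16 python mpi_wrapper.py
        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Each rank gets its own payload invocation and log file, derived from its rank.
        cmd = ["/bin/echo", "simulating events for subjob %d" % rank]   # placeholder payload
        with open("payload_%04d.log" % rank, "w") as log:
            ret = subprocess.call(cmd, stdout=log, stderr=subprocess.STDOUT)

        # Gather return codes on rank 0 to report overall success.
        codes = comm.gather(ret, root=0)
        if rank == 0:
            failed = [i for i, c in enumerate(codes) if c != 0]
            print("%d/%d subjobs succeeded" % (len(codes) - len(failed), len(codes)))
        ```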
      • 86
        The Next Generation ATLAS Production System
        The data processing and simulation needs of the ATLAS experiment at LHC grow continuously, as more data are collected and more use cases emerge. For data processing the ATLAS experiment adopted the data transformation approach, where software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, dynamically submitted by the ATLAS workload management system (PanDA/JEDI) and executed on the Grid, clouds and supercomputers. Patterns in ATLAS data transformation workflows composed of many tasks provided a scalable production system framework for template definitions of the many-tasks workflows. User interface and system logic of these workflows are being implemented in the Database Engine for Tasks (DEFT). Such development required using modern computing technologies and approaches. We report technical details of this development: database implementation, server logic and Web user interface technologies.
        Speaker: Mr Mikhail Borodin (NRNU MEPHI, NRC KI)
        Slides
    • 10:50
      Coffee break
    • Computing for Large Scale Accelerator Facilities (LHC, FAIR, NICA, etc.) and Big Data
      Convener: Prof. Dario Barberis (University and INFN Genova (Italy))
      • 87
        dCache, Sync-and-Share for Big Data
        The availability of cheap, easy-to-use sync-and-share cloud services has split the scientific storage world into the traditional big data management systems and the very attractive sync-and-share services. With the former, the location of data is well understood, while the latter are mostly operated in the Cloud, resulting in a rather complex legal situation. Besides legal issues, those two worlds have little overlap in user authentication and access protocols. While traditional storage technologies popular in HEP are based on X.509, cloud services and sync-and-share software technologies are generally based on user/password authentication or mechanisms like SAML or OpenID Connect. Similarly, the data access models offered by the two are somewhat different, with sync-and-share services often using proprietary protocols. As both approaches are very attractive, dCache.org developed a hybrid system providing the best of both worlds. To avoid reinventing the wheel, dCache.org decided to embed another open source project: OwnCloud. This offers the required modern access capabilities but does not support the managed data functionality needed for large capacity data storage. With this hybrid system, scientists can share files and synchronize their data with laptops or mobile devices as easily as with any other cloud storage service. On top of this, the same data can be accessed via established mechanisms, like GridFTP, to serve the Globus Transfer Service or the WLCG FTS3 tool, or the data can be made available to worker nodes or HPC applications via a mounted filesystem. As dCache provides a flexible authentication module, the same user can access their storage via different authentication mechanisms, e.g. X.509 and SAML. Additionally, users can specify the desired quality of service or trigger media transitions as necessary, thus tuning data access latency to the planned access profile. Such features are a natural consequence of using dCache.
        Speaker: Dr Patrick Fuhrmann (DESY)
        Slides
      • 88
        Complex for mega-science data modeling and processing
        A review of the current status and the program of future developments of the data intensive high performance/high throughput computing complex for mega-science at NRC "Kurchatov Institute", supporting the priority scientific task "Development of mathematical models, algorithms and software for systems with extra-massive parallelism for pilot science and technical areas", is presented. Major upgrades to the GRID, HPC and telecommunications infrastructure and its integration under the data-intensive computing paradigm are described. Keywords: high performance computing, high throughput computing, distributed storage systems, distributed computing, grid.
        Speaker: Eygene Ryabinkin (NRC "Kurchatov Institute")
        Slides
      • 89
        Big Data processing: test results
        Dealing with large volumes of data is tedious work which is often delegated to a computer, and more and more often this task is delegated not just to a single computer but to a whole distributed computing system at once. As the number of computers in a distributed system increases, the amount of effort put into effective management of the system grows. When the system reaches some critical size, much effort should be put into improving its fault tolerance. It is difficult to estimate when a particular distributed system needs such facilities for a given workload, so instead they should be implemented in a middleware which works efficiently with a distributed system of any size. It is also difficult to estimate whether a volume of data is large or not, so the middleware should also work with data of any volume. In other words, the purpose of the middleware is to provide facilities that adapt a distributed computing system to a given workload. Tests show that the presented middleware is well-suited for different types of workloads and that its performance is comparable with well-known alternatives.
        Speaker: Prof. Alexander Degtyarev (Professor)
        Slides
    • 12:40
      LUNCH
    • Innovative IT Education with use of IT-technologies
      Convener: Mrs Evgenia Cheremisina (Dubna International University of Nature, Society and Man. Institute of system analysis and management)
      • 90
        Educational Project for the STAR Experiment at RHIC
        Modern education assumes a significant expansion of the cooperation of universities with leading scientific centers for the training of highly qualified specialists. This report focuses on the MEPhI and JINR joint project for the STAR experiment at RHIC (Brookhaven National Laboratory). The STAR experiment is one of the leading international collaborations in the field of modern nuclear physics. Many exciting discoveries have been made there, including a new state of matter, the "perfect liquid", and the observation of the antimatter helium-4 nucleus and strange antimatter. STAR is composed of 56 institutions and universities from 11 countries, with a total of 557 collaborators. Students of these universities are actively involved in all stages of the experiment and use the results of this work in the preparation of master's and PhD theses. The goal of this project is to attract new students to this exciting research and to include the results of the STAR experiment in the educational process. Using educational materials from our web resource, students and teachers can get much interesting information about various modern experiments, facilities and international collaborations. There are the following subsections: • General information about STAR; • Scientific Highlights; • STAR detector subsystems (general info, technical info, operating principle, photos); • Instruments and methods for data analysis; • Trigger system; • Control Room; • Virtual labs based on real experimental data; • Online STAR data structure (STAR data structure, Online Meta-data Collection and Monitoring Framework for the STAR Experiment at RHIC, remote online analysis); • Contact information for students. Most articles are visualized with interactive animations and 3D models aimed at enhancing students' understanding of the educational material and engagement in the process of science. Currently a lab is implemented that uses experimental data on gold-gold collisions at energies from 7.7 to 200 GeV to study the production of antiprotons at the RHIC collider. As an instrument for the complex study of events, a module to visualize events using interactive 3D graphics has been developed. Furthermore, the project provides students from various universities with new educational resources appropriate to their level of education, allows the creation of a community of students from different countries and universities interested in working in this area of science, and will form a community of learners and teachers.
        Speaker: Prof. Yury Panebrattsev (JINR)
      • 91
        Transition to standard 3+ and optimization of the universities network in Russia
        This report provides an insight into the transition of Russian higher education to standard 3+, the peculiarities of the bachelor programs according to standard 3 and the formation of educational programs of the new standard. Special attention is paid to the optimization of the network of higher education institutions on the basis of consolidation, building a network of Russian universities able to enter the worldwide university rankings. The features, advantages and disadvantages of a sharp consolidation of universities, the reduction of the number of branches and private universities, as well as the role of public and European accreditation for the legal recognition of Russian diplomas are considered too.
        Speaker: Dr Iurii Sakharov (Dubna International University for Nature,Society and Man)
        Slides
      • 92
        Hardware-Software Complex “Virtual Laboratory of Nuclear Fission” for LIS Experiment (Flerov Laboratory of Nuclear Reactions, JINR)
        One important aspect of the pedagogy of modern education is the integration of technological elements of modern science into the educational process. This integration has given rise to what has come to be referred to as blended learning. In this report we focus on the hardware-software complex "Virtual Laboratory of Nuclear Fission" as an example of the incorporation of current scientific data into the educational process. This project uses experimental data of the Light Ion Spectrometer (LIS) from the Flerov Laboratory of Nuclear Reactions, JINR, and is a joint collaboration between MEPhI (Russia), JINR (Russia) and Stellenbosch University (South Africa). The physical process of spontaneous fission has been selected, and all stages of the experiment - its realization, the taking of experimental data and their analysis - have been modeled and visualized. The Virtual Lab includes training materials on the project, a set of interactive laboratory works, 3D models of detectors, interactive quizzes and interactive exercises processing real experimental data from the LIS setup using the ROOT platform. An important feature of this project is the combination of a hands-on and a virtual practicum approach. In this approach students work with real physical equipment and with the hardware-software complex "Virtual Laboratory of Nuclear Fission" to study the electronics and data acquisition of the LIS experiment. This enables students to develop an understanding of the principles of operation and typical ways of using different blocks of electronics as training for their independent work with real physical equipment. At present we are developing new possibilities in the integration of the hardware and software elements of the "Virtual Laboratory of Nuclear Fission": 1. The Virtual Lab makes use of input signals from modern particle detectors, digitized using a modern CAEN 5 GHz digitizer. 2. It provides students with new possibilities of developing their own components, described either as physical processes or as signals from detectors. These possibilities allow the Complex to be an open system for various student exercises. 3. Student lab practicums supplementing this Complex with real physical equipment form an essential and critical aspect of the training and development of young experimental scientists. The project comprises three educational levels: • Elementary level. At this level the objective is to stimulate interest among the students, to give students an idea about the studied phenomena and to expose students to some of the most important aspects of modern nuclear physics. A typical target group at this level is school leavers, undergraduate students and student practice. • Postgraduate level. The goal at this level is to study interactively typical nuclear detectors, different blocks of nuclear electronics and the most important methods of experimental data processing. • Professional level. It is designed for PhD students at the final level of training before independent work, and as training for successful work in scientific collaborations. The software environment at this level provides the functions of a simulator; usage scenarios implement the quintessence of the experience of experts actually working in the experiments. In the framework of this project several international student practicums on experimental nuclear physics were held. Students studied the principles of operation of various electronics blocks, worked with different types of oscilloscopes (old analogue and modern digital), observed and studied signals from different detectors, made calibrations and analyzed experimental data.
        Speakers: Ms Ksenia Klygina (JINR), Ms Victoria Belaga (JINR), Prof. Yury Panebrattsev (JINR)
        Slides
      • 93
        Web-based Builder of Digital Educational Resources
        These days there is a lot of media material available on the internet for educators, including papers and lectures for a wide range of courses and educational programs. But if one wishes to use new, interesting multimedia resources in a classroom, it takes a lot of time to find good quality pedagogical resources that match one's own needs and requirements. The second problem is to integrate different interactive resources into a joint presentation. The goal of the web-based builder of digital educational resources for educators is to provide capabilities for solving a wide range of science education tasks using a set of web services and a library of media objects. The web-based builder contains a basic set of digital educational resources related to modern science and technology, including pictures and photos, videos and animations, 3D models, interactive models and interactive schemes, tests, simulators and a virtual practicum. For ease of use, these web-based resources for educators contain a web service for working with media resources, a web service for building one's own lectures and presentations, and a web service for building one's own tests.
        Speaker: Ms Ksenia Klygina (JINR)
        Slides
      • 94
        E-learning as a Technological Tool to Meet the Requirements of Professional Standards in Training of IT Specialists
        We discuss issues of updating educational programs according to the requirements of the labor market and the professional standards of the IT industry. We suggest the technology of e-learning through an open educational resource to enable the participation of employers in the development of educational content and the intensification of practical training.
        Speaker: Olga Tyatyushkina (Dubna Univeristy)
        Slides
      • 95
        Adaptive educational environment in the IT field of study reacting on changes in the labor market
        The article describes modern approaches to creating educational environments, describes the main technologies for their creation and development, and gives examples of projects in this area, both in Russia and abroad. The needs of participants in the educational process were identified and formalized, and a concept of an adaptive educational environment in the IT field of study, responding to changes in the labor market, is proposed. Structural and functional models of the system were built. Based on the principle of adaptability of an educational environment, the authors developed methods and scenarios for dynamically changing the composition of educational courses and programs in response to changing external requirements and the requirements of participants in the educational process. According to the algorithm, the popularity of technologies is determined by text analysis of a large number of vacancy announcements (a toy keyword-counting sketch follows this entry). An analytical service using this algorithm is proposed to be included in the structure of the adaptive educational environment of a university or faculty.
        Speaker: Mr Yury Samoylenko (Dubna University)
        Slides
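        The toy sketch below shows the general idea of ranking technology keywords by how many vacancy announcements mention them. The keyword list and sample announcements are invented for illustration and bear no relation to the actual analytical service described in the talk.

        ```python
        # Toy sketch: rank technology keywords by the number of vacancies mentioning them.
        # Keywords and sample texts are invented.
        import re
        from collections import Counter

        KEYWORDS = {"java", "python", "c++", "sql", "hadoop", "javascript"}

        vacancies = [
            "Backend developer: Java, SQL, experience with Hadoop is a plus",
            "Data engineer: Python, SQL, Hadoop, Spark",
            "Frontend developer: JavaScript, basic Python",
        ]

        counts = Counter()
        for text in vacancies:
            tokens = set(re.findall(r"[a-z+#]+", text.lower()))   # crude tokenizer
            counts.update(tokens & KEYWORDS)                      # one vote per vacancy

        for tech, n in counts.most_common():
            print("%-12s mentioned in %d of %d vacancies" % (tech, n, len(vacancies)))
        ```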
      • 96
        Virtualization in Education - Information Security lab
        The growing demand for qualified IT experts raises serious challenges for the education and training of young professionals who will answer the scientific, industrial and social problems of tomorrow. Virtualization has a great impact on education, allowing its efficiency to be increased, costs to be cut and the student audience to be expanded by abstracting users from the physical characteristics of computing resources. Simulation and quick deployment of custom environments and even entire networks, with the possibility to put theoretical concepts straight into real-world practice, open new doors for the learning experience both in the classroom and remotely. With the advances of computing technologies, something unthinkable 10 years ago is becoming a commodity and a standard in information technology education. After a short overview of some general virtualization principles, virtual labs as a study tool will be examined for two cases: Information Security training and learning Big Data related tools.
        Speaker: Dr Alexander Karlov (JINR)
        Slides
      • 97
        Virtual Computer Laboratory 2.0. 3D Graphics as Service. Methodological aspects of the use in research and education.
        The authors describe the practice of implementing the Virtual Computer Laboratory at Dubna International University for Nature, Society and Man. The new generation of the virtual computer laboratory has introduced game-changing technology that makes virtualization of professional 3D graphics applications easy to deliver and meets the performance expectations of students studying to become designers and engineers. Providing a high level of education for IT specialists is significant for stable social development amid intensive changes in the social-economic, scientific and technical fields. University graduates with degrees in IT are in demand by innovative enterprises in the SEZ (Special Economic Zone) Dubna, local engineering companies and businesses located in nearby Moscow. In order to improve the quality of professional training, the educational institution must keep up with progress, introducing innovations into the educational process and e-learning systems based on advanced information technologies. For these purposes, the Virtual Computer Laboratory 2.0, based on cloud computing technology, was developed and successfully implemented at Dubna International University for Nature, Society and Man. Creating the University private cloud based on VMware technologies, IBM/Lenovo blade servers and Supermicro server platforms with a GRID of Nvidia Kepler graphics cards allows implementing the latest operating systems and a wide range of software (now with 3D graphics support) and delivering the parallel computing platform and programming model invented by Nvidia. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU) for a wide range of scientific analysis, modeling, graphical representation and engineering development tasks. The solution also accelerates virtual desktops and applications, allowing us to deliver true graphics from the datacenter to any user on the network and to achieve improved productivity and mobility for all of our students, postgraduates and teachers. The Virtual Computer Laboratory also provides access to all critical applications, including the most 3D-intensive, highly responsive, rich multimedia experiences, from anywhere, on any device. The practice of virtual computer laboratory use has demonstrated that this efficient instrument can and must become a part of modern innovative education, forming the professional competences of graduates so that they are able to successfully solve tasks in accordance with the profession being obtained.
        Speaker: Nadezhda Tokareva (Dubna Univeristy)
        Slides
    • Workload Management Systems in Applied Research and BigData
      Convener: Dr Alexei Klimentov (Brookhaven National Lab)
      • 98
        NEW TECHNOLOGIES OF 2-D & 3-D MODELING FOR ANALYSIS AND MANAGEMENT OF NATURAL RESOURCES
        To ensure technological support of research and administrative activity in the sphere of environmental management, a specialized modular program complex was developed. Its components support the three main stages of any such project: effective management of data and the construction of information-analytical systems of various complexity; complex analytical processing of spatial information and the solution of predictive and diagnostic tasks; and operational visualization of data and research results on the Internet, together with the creation of situational centers supporting administrative decisions. Special attention in the development of the program complex is given to the creation of convenient and effective tools for building and visualizing 2D and 3D models, supporting the analysis and management of natural resources.
        Speaker: Mrs Evgenia Cheremisina (Dubna International University of Nature, Society and Man. State Scientific Centre «VNIIgeosystem».)
        Slides
      • 99
        Tier-1 in Kurchatov Institute: first months of operations during Run-2
        An overview of Tier-1 operations during the beginning of LHC Run-2 will be presented. We will talk about three supported experiments, ALICE, ATLAS and LHCb: current status of resources and computing support, challenges, problems and solutions. Also we will give an overview of the wide-area networking situation and integration of our Tier-1 with regional Tier-2 centers.
        Speaker: Eygene Ryabinkin (NRC "Kurchatov Institute")
        Slides
      • 100
        JINR TIER-1 Centre for the CMS Experiment at LHC
        An overview of the JINR Tier-1 centre for the CMS experiment at the LHC is given. A special emphasis is placed on the main tasks and services of the CMS Tier-1 at JINR. In February 2015 the JINR CMS Tier-1 resources were increased to the level that was outlined in JINR's rollout plan: CPU 2400 cores (28800 HEP-Spec06), 2.4 PB disks, and 5.0 PB tapes. The first results of Tier-1 operations during the beginning of LHC Run-2 will be presented.
        Speaker: Dr Tatiana Strizh (JINR)
        Slides
      • 101
        Status of RDMS CMS Computing
        The Compact Muon Solenoid (CMS) is a high-performance general-purpose detector at the Large Hadron Collider (LHC) at CERN. The Russia and Dubna Member States (RDMS) CMS collaboration was founded in 1994. More than twenty institutes from Russia and the Joint Institute for Nuclear Research (JINR) are involved in the RDMS CMS Collaboration, which takes an active part in the CMS experiment. A proper computing grid infrastructure has been constructed at the RDMS institutes for participation in the running phase of the CMS experiment. The current status of RDMS CMS computing after the CMS detector upgrade and the LHC restart is presented.
        Speaker: Dr Elena Tikhonenko (JINR)
        Slides
      • 102
        Simulation Loop between CAD systems, Geant4 and GeoModel: Implementation and Results
        The data vs Monte Carlo discrepancy is one of the most important fields of investigation for ATLAS simulation studies. There are several reasons for the above-mentioned discrepancies, but the primary interest falls on geometry studies and the investigation of how adequately the geometry descriptions of the detector used in simulation represent the "as-built" descriptions. Consistency and detail of shapes are less important, while adequate volumes and weights of detector components are essential for tracking. There are two main sources of faults in the geometry descriptions used in simulation: 1) inconsistency with the "as-built" geometry descriptions; 2) internal inaccuracies of the conversions introduced by the simulation packages themselves. The Georgian engineering team developed a hub on the basis of the CATIA platform and several tools enabling CATIA to read the different descriptions used by simulation packages: XML/Persint->CATIA; IV/VP1->CATIA; GeoModel->CATIA; Geant4->CATIA. As a result it becomes possible to compare different descriptions with each other using the full power of CATIA and to investigate both classes of faults of the geometry descriptions. The paper presents the results of an investigation of the quality of the geometry conversions performed by the simulation packages, and case studies of the ATLAS coils, end-cap toroid and Big Wheel structures.
        Speakers: Prof. Alexander SHARMAZANASHVILI (Georgian Technical University), Mr Niko Tsutskiridze (Georgian Technical University)
        Slides
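        To illustrate the kind of check such a CATIA-based comparison makes possible, below is a minimal Python sketch that flags detector components whose volume or mass differs between two geometry exports. The CSV layout, file names and tolerance are assumptions made purely for illustration; they do not describe the authors' actual tooling.

```python
# Hypothetical sketch: flag detector components whose volume or mass differs
# between two geometry exports (e.g. an "as-built" CAD dump vs. a GeoModel dump).
# The CSV layout (name, volume_cm3, mass_kg) is an assumption for illustration.
import csv

def load_components(path):
    """Read a CSV of (name, volume_cm3, mass_kg) into a dict keyed by name."""
    with open(path, newline="") as f:
        return {row["name"]: (float(row["volume_cm3"]), float(row["mass_kg"]))
                for row in csv.DictReader(f)}

def compare(ref_path, test_path, tolerance=0.01):
    """Print components whose volume or mass deviates by more than `tolerance`."""
    ref, test = load_components(ref_path), load_components(test_path)
    for name in sorted(ref.keys() & test.keys()):
        (v_ref, m_ref), (v_test, m_test) = ref[name], test[name]
        dv = abs(v_test - v_ref) / max(v_ref, 1e-9)
        dm = abs(m_test - m_ref) / max(m_ref, 1e-9)
        if dv > tolerance or dm > tolerance:
            print(f"{name}: volume off by {dv:.1%}, mass off by {dm:.1%}")

if __name__ == "__main__":
    compare("as_built_export.csv", "geomodel_export.csv")  # placeholder file names
```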
      • 103
        LGD cluster LNP as a basic platform for tasks of the ATLAS Experiment
        This report describes actions that help system administrators quickly commission or replace cluster hardware. It combines general background with specific examples for better understanding: in particular, work with IPMI (Intelligent Platform Management Interface), remote configuration of Worker Nodes over SSH, and work with DHCP. A method for copying and deploying Worker Node images is included as well. The report contains a step-by-step guide with all the commands needed, which will be helpful for administrators; a minimal remote-IPMI sketch is shown below. All the work was carried out on the cluster of the Laboratory of Nuclear Problems (LNP).
        Speaker: Ivan Bednyakov (JINR)
        Slides
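        As a companion to the step-by-step guide, the following is a minimal Python sketch of remote power control over IPMI using the standard ipmitool utility. The node addresses and credentials are placeholders, and the exact commands used on the LNP cluster may differ.

```python
# Minimal sketch of remote power control for worker nodes over IPMI, driving
# the standard ipmitool CLI from Python. Host names and credentials are
# placeholders; the actual LNP procedure may differ.
import subprocess

NODES = ["wn01-ipmi.example.org", "wn02-ipmi.example.org"]  # hypothetical BMC addresses

def ipmi(host, *args, user="admin", password="secret"):
    """Run one ipmitool command against a node's BMC over the lanplus interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

if __name__ == "__main__":
    for node in NODES:
        print(node, ipmi(node, "chassis", "power", "status"))
        # After deploying a fresh Worker Node image one could power-cycle it:
        # ipmi(node, "chassis", "power", "cycle")
```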
      • 104
        Efficient Data Management Tools for the Heterogeneous Big Data Warehouse
        Traditional relational databases (RDBMS) are built around consistent, normalized data structures. RDBMS have served well for decades, but the technology is not optimal for data processing and analysis in data-intensive fields such as social networks, the oil and gas industry, or the experiments at the Large Hadron Collider. Several challenges have been raised recently concerning the scalability of data-warehouse-like workloads against a transactional schema, in particular for the analysis of archived data or the aggregation of data for summary and accounting purposes. We have evaluated new approaches to handling vast amounts of data. In particular, we have studied a new class of technologies commonly referred to as non-relational (NoSQL) databases, including schema-less key-value, column and document stores such as HBase, Cassandra and MongoDB. We studied the performance, throughput and scalability of these technologies for several scientific and industrial use cases; the detailed studies and comparisons make the results applicable to different heterogeneous systems. This paper presents the technologies and architectures we have studied, as well as a description of the back-end application that uploads data from an RDBMS to the NoSQL data warehouse (a minimal sketch of such an uploader follows below), the organization of the NoSQL database, and how it can be used for data analytics in the future.
        Speaker: Ms Victoriya Osipova (Tomsk Polytechnic University, Tomsk, Russia)
        Slides
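        As an illustration of the uploading step described above, here is a minimal Python sketch that copies rows from a relational source (SQLite as a stand-in) into Cassandra using the DataStax cassandra-driver. The keyspace, table and column names are invented for this example and are not the schema used in the study.

```python
# Minimal sketch of an RDBMS-to-NoSQL uploader in the spirit of the back-end
# application described above: rows are read from a relational source (SQLite
# as a stand-in) and written into Cassandra. Keyspace, table and column names
# are invented for this example.
import sqlite3
from cassandra.cluster import Cluster  # pip install cassandra-driver

def upload(sqlite_path="warehouse.db", cassandra_hosts=("127.0.0.1",)):
    src = sqlite3.connect(sqlite_path)
    session = Cluster(list(cassandra_hosts)).connect("warehouse")  # keyspace assumed to exist
    insert = session.prepare(
        "INSERT INTO measurements (sensor_id, ts, value) VALUES (?, ?, ?)")
    # Stream rows out of the RDBMS and write them into the NoSQL store.
    for sensor_id, ts, value in src.execute(
            "SELECT sensor_id, ts, value FROM measurements"):
        session.execute(insert, (sensor_id, ts, value))
    src.close()

if __name__ == "__main__":
    upload()
```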
      • 105
        The development of hybrid metadata storage for PanDA Workload Management System
        Scientific computing in the field of High Energy and Nuclear Physics (HENP) produces vast volumes of data. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the Large Hadron Collider (LHC), operating at the international CERN laboratory in Geneva, Switzerland; it runs up to 1.5 million jobs daily, submitting them through the PanDA workload management system. To track the execution of computational and analytical tasks, PanDA uses a monitoring application that contains a set of summary tables, charts and graphs aggregating data from the central SQL-based metadata storage (Oracle RDBMS). The growth rate of the volume of stored information has increased significantly over the last few years: from about 500 thousand completed jobs per day in 2011 up to 2 million during LHC Run 1 (2012-2013). The present metadata storage technology significantly limits the performance of analytical tasks. This research work is focused on the development of a Hybrid Metadata Storage Framework (HMSF) that would improve the scalability and performance of the PanDA metadata store. In this framework, the scalability issue is addressed by integrating a relational database with a NoSQL data store, combining the strengths of both (a schematic routing sketch follows below). We have developed a prototype of HMSF that provides data transfer and synchronization between the parts of the hybrid storage, with Cassandra as the NoSQL backend. HMSF provides an API that interprets requests from external applications. The PanDA monitor was partly adapted to interact with HMSF: operational data queries are forwarded to the primary SQL-based repository, while analytic data requests are processed by the NoSQL database, which stores prepared query-specific data structures. Performance and scalability tests of the HMSF-adapted part of the PanDA monitor show that aggregating and pre-calculating data in advance, with the help of the HMSF synchronization mechanisms, provides a significant performance improvement without adding much complexity to the resulting system.
        Speaker: Ms Maria Grigorieva (National Research Center “Kurchatov Institute”)
        Slides
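        The routing idea behind such a hybrid store can be summarized by the following schematic Python sketch; the class and method names are hypothetical and do not represent the actual HMSF API.

```python
# Schematic sketch of the query routing in a hybrid metadata store: operational,
# record-level queries go to the relational backend, while analytic requests are
# served from pre-aggregated structures in the NoSQL backend. The class and
# method names are hypothetical and do not represent the actual HMSF API.
class HybridMetadataStore:
    def __init__(self, sql_backend, nosql_backend):
        self.sql = sql_backend      # e.g. a wrapper around the Oracle repository
        self.nosql = nosql_backend  # e.g. a wrapper around a Cassandra session

    def get_job(self, job_id):
        """Operational lookup of a single job record: primary SQL repository."""
        return self.sql.fetch_one(
            "SELECT * FROM jobs WHERE job_id = :id", id=job_id)

    def jobs_per_site(self, day):
        """Analytic summary: read a pre-computed, query-specific NoSQL table."""
        return self.nosql.fetch_all(
            "SELECT site, njobs FROM jobs_per_site_daily WHERE day = ?", (day,))
```

        The essential design choice is that aggregates are pre-calculated and synchronized into query-specific NoSQL tables, so analytic requests never touch the transactional schema.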
    • 16:10
      Coffee break
    • ROUND TABLE: (in Russian)

      "Russian research, scientific and educational centers coherent and consolidated efforts in computing research and software development for mega-science projects in HENP and other compute-intensive sciences in Russia”

      Conveners: Dr Alexei Klimentov (Brookhaven National Lab), Dr Mikhail Korotkov (INNOPRAKTIKA), Dr Tadeusz Kurtyka (CERN), Dr Vladimir Korenkov (JINR)
    • Workload Management Systems in Applied Research and BigData
      Convener: Dr Andrei Tsaregorodtsev (CPPM-IN2P3-CNRS)
      • 106
        PanDA for COMPASS at JINR
        PanDA (Production and Distributed Analysis System) is a workload management system widely used for data processing at the experiments at the Large Hadron Collider (LHC) and elsewhere. COMPASS is a high-energy physics experiment at the Super Proton Synchrotron (SPS). Data processing for COMPASS has historically run locally at CERN, on lxbatch, with the data stored in CASTOR. In 2014 the idea arose of running COMPASS production through PanDA. Such a transformation of the experiment's data processing will allow the COMPASS community to use not only CERN resources but also Grid resources worldwide. During the spring and summer of 2015 this work has been performed at JINR. Details and results of this process will be presented in this report.
        Speaker: Mr Artem Petrosyan (JINR)
        Slides
      • 107
        Use of the Hadoop structured storage tools for the ATLAS EventIndex event catalogue.
        The ATLAS experiment collects billions of events per year of data-taking and processes them to make them available for physics analysis in several different formats. An even larger number of events is in addition simulated according to physics and detector models, then reconstructed and analysed to be compared with real events. The EventIndex is a catalogue of all events in each production stage; it includes for each event a few identification parameters, some basic immutable information coming from the online system, and references to the files that contain the event in each format (plus the internal pointers to the event within each file for quick retrieval). Each EventIndex record is logically simple, but the system has to hold many tens of billions of records, all equally important; a minimal record-layout sketch is given below. The Hadoop technology was selected at the start of the EventIndex project development in 2012 and proved to be robust and flexible enough to accommodate this kind of information; both the insertion times and the query response times are acceptable for the continuous and automatic operation that started in spring 2015. This talk will describe the EventIndex data input and organisation in Hadoop and explain the operational challenges that were overcome in order to achieve the expected good performance.
        Speaker: Dr Andrea Favareto (University and INFN Genova (Italy))
        Slides
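        For illustration, the sketch below shows how one event record, with its identification parameters and file references, might be written to and read back from an HBase table using the happybase client. The row-key layout, table name and column family are assumptions for this example, not the actual ATLAS EventIndex schema.

```python
# Illustrative sketch (not the actual ATLAS EventIndex schema) of storing and
# retrieving one event record in HBase with the happybase client. The row-key
# layout, table name and column family are assumptions for this example.
import happybase  # pip install happybase; requires an HBase Thrift server

def store_event(table, run_number, event_number, guid, offset):
    """Store one event record keyed by run and event number."""
    row_key = f"{run_number:08d}-{event_number:012d}".encode()
    table.put(row_key, {
        b"ref:file_guid": guid.encode(),      # file that contains the event
        b"ref:offset": str(offset).encode(),  # internal pointer within the file
    })

if __name__ == "__main__":
    conn = happybase.Connection("hbase.example.org")  # placeholder host
    table = conn.table("eventindex")                  # table assumed to exist
    store_event(table, run_number=12345, event_number=67890,
                guid="FILE-GUID-0001", offset=42)
    print(table.row(f"{12345:08d}-{67890:012d}".encode()))
```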
      • 108
        Configuration management at CERN
        The CERN IT Department provides configuration management services to the LHC experiments and to the department itself for more than 17,000 physical and virtual machines in two data centres. The services are based on open-source technologies such as Puppet and Foreman. The presentation will give an overview of the current deployment, the issues observed over the last few years, the solutions adopted, and the challenges for the future.
        Speaker: Mr Ignacio Barrientos Arias (CERN)
        Slides
      • 109
        Collaboration and decision making tools for mobile groups
        Nowadays the use of distributed collaboration tools is widespread in many areas of human activity, but a lack of mobility and a dependency on specific equipment create difficulties and slow down the development and integration of such technologies. Mobile technologies allow individuals to interact with each other without the need for traditional office space and regardless of location. Hence, the realization of dedicated infrastructures on mobile platforms, with the help of ad-hoc wireless local networks, could eliminate the attachment to fixed hardware and also be useful from a scientific point of view. From basic Internet messengers to complex software for online collaboration in large-scale workgroups, we show implementations of tools based on mobile infrastructures. Despite the growth of mobile infrastructures, applied distributed solutions for group decision-making and e-collaboration are not yet common. In this article we propose a software complex for real-time collaboration and decision-making based on mobile devices, aimed at increasing mobility and improving on current solutions.
        Speakers: Mr Serob Balyan (Saint-Petersburg State University), Mr Suren Abrahamyan (Saint-Petersburg State University)
        Slides
      • 110
        Mathematical modeling of heterogeneous distributed data storages
        The work reviews the development of a mathematical approach to modeling heterogeneous distributed data storages. Different modeling approaches (Monte Carlo, agent-based modeling) are reviewed, and a performance analysis of systems based on commercial Oracle solutions and on free solutions (Cassandra, Hadoop) is provided. It is expected that the developed tool will help optimize the distribution of data between nodes and eliminate problematic parts of a complex system; a minimal Monte Carlo sketch is given below.
        Speaker: Valeriy Parubets (National Research Tomsk Polytechnic University)
        Slides
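        A minimal Monte Carlo sketch of this kind of model is shown below: files with randomly drawn sizes are placed on storage nodes of different capacities and the resulting fill levels are reported. All parameters and the placement policy are invented for illustration and are not the model used in the talk.

```python
# Minimal Monte Carlo sketch of the kind of model discussed above: files with
# randomly drawn sizes are placed on storage nodes of different capacities and
# the resulting fill levels are reported. All parameters and the placement
# policy are invented for illustration.
import random

def simulate(node_capacities_tb, n_files=100_000, mean_file_gb=2.0, seed=0):
    rng = random.Random(seed)
    used_gb = [0.0] * len(node_capacities_tb)  # used space per node, in GB
    for _ in range(n_files):
        size = rng.expovariate(1.0 / mean_file_gb)  # exponential file-size model
        # Placement policy under test: send the file to the node with most free space.
        target = max(range(len(used_gb)),
                     key=lambda i: node_capacities_tb[i] * 1000 - used_gb[i])
        used_gb[target] += size
    return [u / (c * 1000) for u, c in zip(used_gb, node_capacities_tb)]

if __name__ == "__main__":
    fill = simulate([100, 200, 400])  # three heterogeneous nodes, capacities in TB
    print("Fill fractions per node:", [round(f, 3) for f in fill])
```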
    • Closing