SCIENCE BRINGS NATIONS TOGETHER
Symposium on Nuclear Electronics and Computing - NEC'2019

Europe/Podgorica
Conference Hall, Splendid Conference & SPA Resort (Hotel Splendid), 85315 Becici, Budva, Montenegro
Description
Welcome to NEC’2019!

On 30 September – 4 October 2019, Budva (Montenegro) will host the 27th JINR Symposium on Nuclear Electronics and Computing, NEC'2019. The symposia have been held since 1963.

For the ninth time, the Symposium is organized jointly by JINR and CERN. Its attendees are leading specialists in advanced computing and network technologies, distributed computing, grid and cloud computing, and nuclear electronics.

All previous forums of this series were highly appreciated by the leading specialists and companies involved.

The organizers of the NEC symposia have traditionally paid particular attention to young scientists and specialists. Previous NEC conferences attracted an impressive number of such attendees, reaching 35% of the total number of participants.

In 2011, 2013, 2015 and 2017, student schools on advanced information technologies were organized within the framework of the symposium, each attended by almost 80 students from different countries. In 2019, this tradition continues.

Chairpersons
Vladimir Korenkov, JINR
Ian Bird, CERN

Book of Abstracts
Participants
  • Aleksandr Malakhov
  • Aleksey Kurilkin
  • Alena Kuznetsova
  • Alexander Avrorin
  • Alexander Bychkov
  • Alexander Degtyarev
  • Alexander Golunov
  • Alexander Kryukov
  • Alexander Malko
  • Alexander Moskovsky
  • Alexander Uzhinskiy
  • Alexander Wagner
  • Alexandr Mikula
  • Alexey Altynov
  • Alexey Anisenkov
  • Alexey Sokolov
  • Alexey Stadnik
  • Alexey Voinov
  • Alexey Vorontsov
  • Anastasiia Kaida
  • Andrea Valassi
  • Andrey Baginyan
  • Andrey Churakov
  • Andrey Dolbilov
  • Andrey Kiryanov
  • Andrey Kotov
  • Andrey Nechaevskiy
  • Andrey Polyakov
  • Andrey Semin
  • Andrey Sheshukov
  • Andrey Shevel
  • Andrey Yudin
  • Andrey Zarochentsev
  • Anna Fatkina
  • Anna Maksymchuk
  • Anton Sevastianov
  • Anton Teslyuk
  • Antonin Opichal
  • Antonio Policicchio
  • Artem Petrosyan
  • Boris Sharkov
  • Boris Starchenko
  • Carlo Battilana
  • Christopher Kullenberg
  • Danila Oleynik
  • Daria Priakhina
  • Dario Barberis
  • Darya Stankus
  • Denis Korablev
  • Dmitrii Dementev
  • Dmitrii Monakhov
  • Dmitrii Ponomarev
  • Dmitriy Maximov
  • Dmitry Eliseev
  • Dmitry Garanov
  • Dmitry Kulyabov
  • Dmitry Podgainy
  • Duke Oeba
  • Egor Shchavelev
  • Egor Zadeba
  • Ekaterina Streletskaya
  • Elena Kirpicheva
  • Elena Kolesnikova
  • Elena Litvinenko
  • Elena Popova
  • Elena Rusakovich
  • Elena Yasinovskaya
  • Eugene Goryachkin
  • Evgenia Cheremisina
  • Evgeniia Kuteinikova
  • Evgeny Kuzin
  • Evgeny Tushov
  • Eygene Ryabinkin
  • Fabrizio Ferro
  • Fedor Prokoshin
  • Gennady Ososkov
  • Gleb Marmuzov
  • Grigory Krasov
  • Haibo Li
  • Hale Sert
  • Hristo Nazlev
  • Igor Golutvin
  • Igor Pelevanyuk
  • Igor Semenushkin
  • Ilya Shirikov
  • Irina Enyagina
  • Irina Filozova
  • Irina Titkova
  • Isabelle De Bruyn
  • Ivan Bedniakov
  • Ivan Kadochnikov
  • Ivan Sapozhkov
  • Ivan Vankov
  • Konstantin Androsov
  • Konstantin Dobrosolets
  • Konstantin Gertsenberger
  • Kseniia Klygina
  • Leo Schlattauer
  • Leonid Sevastianov
  • Ligang Xia
  • Lukas Mizisin
  • Margarita Stepanova
  • Marina Golosova
  • Martina Ressegotti
  • Maxim Zuev
  • Michele Faucci Giannelli
  • Mikhail Belov
  • Mikhail Itkis
  • Mikhail Serdyukov
  • Mikhail Shitenkov
  • Mikhail Titov
  • Milos Lokajicek
  • Nadezhda Shchegoleva
  • Nadezhda Tokareva
  • Natalia Nikishina
  • Nataliia Kulabukhova
  • Nataliya Vorontsova
  • Nelli Pukhaeva
  • Nikita Balashov
  • Nikita Stepanov
  • Nikolay Gorbunov
  • Nikolay Kutovskiy
  • Nikolay Mester
  • Nikolay Voytishin
  • Nikolay Zernin
  • Nikolina Ilic
  • Oksana Kreider
  • Oksana Streltsova
  • Oleg Iakushkin
  • Oleg Rogachevskiy
  • Oleg Samoylov
  • Oleg Strekalovsky
  • Olga Rumyantseva
  • Olga Sedova
  • Olga Tarantina
  • Pavel Goncharov
  • Pavel Kohout
  • Pavel Lavrenko
  • Petr Fedchenkov
  • Petr Vokac
  • Petr Zrelov
  • Rafal Bielski
  • Ran Du
  • Roumyana Hadjiiska
  • Rozaliia Matveeva
  • Sergei Afanasiev
  • Sergei Nemnyugin
  • Sergey Belov
  • Sergey Pavlov
  • Sergey Sergeev
  • Sergey Sidorchuk
  • Sergey Sobolev
  • Sheng Sen Sun
  • Snezhana Potemkina
  • Stanislav Pakulyak
  • Stepan Vereschagin
  • Tatiana Strizh
  • Tatiana Tyupikova
  • Tatiana Zaikina
  • Temur Enik
  • Timofei Galkin
  • Vadim Bednyakov
  • Vadim Kochetov
  • Valentin Ustinov
  • Valery Mitsyn
  • Vasilii Shvetsov
  • Vasiliy Velikhov
  • Vera Inkina
  • Veronika Zabanova
  • Viacheslav Iliin
  • Victor Barashko
  • Victor Matveev
  • Victor Zhiltsov
  • Victoria Belaga
  • Viktor Kotliar
  • Viktor Krylov
  • Vito Palladino
  • Vladimir Drozdov
  • Vladimir Elkin
  • Vladimir Karjavine
  • Vladimir Khalin
  • Vladimir Korenkov
  • Vladislav Vorobyev
  • Wainer Vandelli
  • William Phukungoane
  • Yann Donon
  • Yaroslav Tarasov
  • Yelena Mazhitova
  • Yulia Ivanova
  • Yuri Butenko
  • Yuri Minaev
  • Yuri Sakharov
  • Zurab Modebadze
    • 08:30 09:30
      Registration Splendid Conference & SPA Resort, 3rd floor

    • 09:30 10:20
      Welcome speeches Splendid Conference & SPA Resort, Conference Hall Petroviċa

      Convener: Dr Vladimir Korenkov (JINR)
      • 09:30
        Welcome from Montenegro officials 10m
      • 09:40
        Welcome from Organizing Committee 10m
      • 09:50
        Welcome from Sponsors 30m
    • 10:20 11:20
      Plenary Splendid Conference & SPA Resort, Conference Hall Petrovica

      Convener: Dr Vladimir Korenkov (JINR)
      • 10:20
        Scientific Program of JINR 30m
        Speaker: Prof. Victor Matveev (JINR)
        Slides
      • 10:50
        Modern Heavy Ion Accelerator Facilities in JINR and Worldwide 30m
        Speaker: Prof. Boris Sharkov (JINR)
        Slides
    • 11:20 11:40
      Coffee break 20m Conference Bar

    • 11:40 13:40
      Plenary Splendid Conference & SPA Resort, Conference Hall Petroviċa

      Convener: Dr Tadeusz Kurtyka (CERN)
      • 11:40
        IT and LIT – strategy of development 30m
        Information technologies are one of the key drivers of the development strategy and of progress in scientific studies. Their development contributes to enhancing the quality of research, speeding up the acquisition of results and new scientific knowledge, effective management, the emergence of novel forms of education, improved communication and interaction between scientists, and access to a wide range of information. The fast evolution of IT, on the one hand, and its rapid obsolescence, on the other, stimulate demand for new research and development. The development of supercomputer (HPC) calculations, clouds, networks, novel architectures and principles of organizing computations entails the transformation of software and infrastructure solutions, leading to innovative changes in the strategy of scientific studies. A concept for the development of IT technologies and scientific computing can be formulated; it is aimed at solving the strategic problems facing JINR through the introduction and development of a whole range of novel IT solutions integrated into a unified information and computing environment, i.e. a scientific IT ecosystem combining a great number of various technological solutions, concepts and principles.
        Speaker: Dr Vladimir Korenkov (JINR)
        Slides
      • 12:10
        CERN – JINR Collaboration: Past, Present and Future 30m
        Speaker: Dr Tadeusz Kurtyka (CERN)
        Slides
      • 12:40
        NICA project at JINR 30m
        The NICA (Nuclotron-based Ion Collider fAcility) project is aimed at studying hot and dense baryonic matter in heavy-ion collisions in the energy range up to $\sqrt{s_{NN}} = 11.0$ GeV. The NICA accelerator complex includes an upgrade of the existing superconducting synchrotron "Nuclotron" and the construction of new injection sources, a superconducting booster, and superconducting collider rings with two interaction points (IP). The heavy-ion collision program will be performed with the fixed-target experiment Baryonic Matter at Nuclotron (BM@N) at the beam extracted from the Nuclotron, and with the Multi-Purpose Detector (MPD) at the first IP of the NICA collider. Investigation of the nucleon spin structure and polarization phenomena is foreseen with the Spin Physics Detector (SPD) at the second IP of the collider. The BM@N experiment will work with light nuclei to study particle production at kinetic energies up to 4 AGeV. The Multi-Purpose Detector (MPD) will investigate heavy-ion collisions at the NICA collider in the energy range $\sqrt{s_{NN}} = 4 - 11$ GeV. The MPD physics purpose is to reach a better understanding of QCD matter under extreme conditions of high baryonic density by studying collective phenomena such as $\Lambda$ polarization, dilepton yields, and the production of multi-strange hyperons and hypernuclei. The MPD construction is progressing in accordance with the schedule. The Spin Physics Detector will use polarized protons and deuterons to study spin- and polarization-dependent effects in hadron-hadron collisions.
        Speaker: Dr Oleg Rogachevskiy (JINR)
        Slides
      • 13:10
        Monitoring and Accounting for the Distributed Computing System of the ATLAS Experiment 30m
        Over the years ATLAS has developed a large number of monitoring and accounting tools for distributed computing applications. In advance of the increased experiment data rates and monitoring data volumes foreseen for LHC Run 3, starting in 2021, a new infrastructure has been provided by the CERN-IT Monit group, based on InfluxDB as the data store and Grafana as the display environment. ATLAS is adapting and further developing its monitoring tools to use this infrastructure for data and workflow management monitoring and accounting dashboards, expanding the range of previous possibilities with the aim of achieving a single, simpler environment for all monitoring applications. This presentation will describe the tools used, the data flows for monitoring and accounting, the problems encountered and the solutions found. (A minimal illustrative sketch of writing such monitoring points follows this entry.)
        Speaker: Prof. Dario Barberis (University and INFN Genova (Italy))
        Slides
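        A minimal, hypothetical sketch of how a single accounting point could be written to InfluxDB and later charted in Grafana, using the community "influxdb" Python client. The host, database, measurement, tag and field names are invented for illustration and are not the actual ATLAS/CERN-IT MONIT data flow.

          # Illustrative only: push one accounting point to InfluxDB.
          from datetime import datetime, timezone

          from influxdb import InfluxDBClient  # pip install influxdb

          client = InfluxDBClient(host="localhost", port=8086, database="accounting")

          point = {
              "measurement": "job_accounting",   # hypothetical measurement name
              "tags": {"site": "SITE-A", "workflow": "simulation"},
              "time": datetime.now(timezone.utc).isoformat(),
              "fields": {"completed_jobs": 1250, "wall_hours": 8342.5},
          }

          # A Grafana data source pointed at the same database can then chart it.
          client.write_points([point])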
    • 13:40 15:00
      LUNCH 1h 20m
    • 15:00 16:00
      Plenary Splendid Conference & SPA Resort, Conference Hall Petroviċa

      Convener: Prof. Aleksandr Malakhov (JINR)
      • 15:00
        Precision Luminosity Measurement with the CMS detector for HL-LHC 30m
        The High Luminosity upgrade of the LHC (HL-LHC) is foreseen to increase the instantaneous luminosity by a factor of five to seven times the LHC nominal design value. The resulting, unprecedented requirements for background monitoring and luminosity measurement create the need for new high-precision instrumentation at CMS, using radiation-hard detector technologies. This contribution presents the strategy for bunch-by-bunch online luminosity measurement based on various detector technologies. A main component of the system is the Tracker Endcap Pixel Detector (TEPX), with an additional 75 kHz of dedicated triggers for online measurement of luminosity and beam-induced background. Real-time implementations of algorithms such as pixel cluster counting on an FPGA are explored for online processing of the resulting data. The potential exploitation of the Outer Tracker, the Hadron Forward calorimeter and muon trigger objects will also be discussed. (A toy rate-to-luminosity conversion is sketched after this entry.)
        Speaker: Elena Popova (CERN)
        Slides
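        As a toy illustration of bunch-by-bunch luminometry, the standard relation L_b = mu_vis * f_rev / sigma_vis converts the mean number of visible objects per bunch crossing (here: pixel clusters) into an instantaneous per-bunch luminosity; the visible cross section below is a placeholder, not a CMS calibration value.

          # Toy rate-to-luminosity conversion; calibration numbers are placeholders.
          F_REV = 11245.0        # LHC revolution frequency, Hz
          SIGMA_VIS = 5.0e-27    # hypothetical visible cross section, cm^2 (5 mb)

          def bunch_luminosity(mean_clusters_per_crossing: float) -> float:
              """Instantaneous per-bunch luminosity in cm^-2 s^-1."""
              return mean_clusters_per_crossing * F_REV / SIGMA_VIS

          # Example: 20 visible clusters per crossing on average for one bunch.
          print(f"{bunch_luminosity(20.0):.3e} cm^-2 s^-1")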
      • 15:30
        Toward the borders of nuclear stability: exploration of exotic nuclei in FLNR 30m
        Nuclear forces are capable of keeping together only a certain number of protons and neutrons, forming nuclei situated in the chart of nuclides within an area confined by the so-called borders of nuclear stability. For the time being these boundaries are known up to Z=13 for nuclei with a neutron excess and up to Z=32 for proton-rich nuclei. The search for the stability borders of heavier nuclei and the exploration of the properties of nuclides far from the β-stability line are among the most pressing tasks of modern nuclear physics. A number of experimental approaches have been worked out in the past few decades to reveal the unusual properties of short-lived nuclei oversaturated with excess neutrons or protons. One of the main trends in nuclear science now is the construction of radioactive ion beam "factories", but apart from this, further advances demand elaborate tools and effective techniques. This report will present some aspects of the scientific program of the Flerov Laboratory in this field, including the ideas of forthcoming experiments and the techniques required for their implementation.
        Speaker: Dr Sergey Sidorchuk (FLNR JINR)
    • 16:00 16:30
      Coffee break 30m Conference Bar

    • 16:30 19:00
      Plenary Splendid Conference & SPA Resort, Conference Hall Petroviċa

      Convener: Dr Andrea Valassi (CERN)
      • 16:30
        Making Supercomputers Smart: the Moscow State University Experience 30m
        Since the very beginning of the computer era, the Research Computing Center of Moscow State University has been equipped with the most modern computing hardware. These days RCC MSU still operates large-scale supercomputers, including Lomonosov and Lomonosov-2. The supercomputers are open to the research and education community, supporting hundreds of projects. The huge number of hardware and software components and parameters, together with the complexity of the architectures implemented, raises an extremely important question: how efficiently are the supercomputers used? The efficiency study requires deep monitoring and analysis of all processes within the supercomputers. To improve the efficiency, a set of tools and techniques should be created to make quick and automated decisions on all aspects of supercomputer functioning. The talk will share the RCC MSU experience in improving supercomputer productivity by means of smart software and analytical techniques.
        Speaker: Mr Sergey Sobolev (RCC MSU)
        Slides
      • 17:00
        MC2E: Meta-Cloud Computing Environment for HPC 30m
        Modern practical research in physics, chemistry and biology has shifted towards simulations, processing of experimental results and data mining, thus imposing immense demands on computational resources. The problem is that, due to their heterogeneous nature, such resources may have a high variance in their load, so users may wait for weeks until their job is done, even though there are plenty of resources available on other platforms. This problem arises because various platforms may have significantly different APIs, and when researchers are used to working with one interface it is often expensive to adapt their software to work with some other interface. In this research we present MC2E, an environment for academic multidisciplinary research. MC2E aggregates heterogeneous resources such as private/public clouds, HPC clusters and supercomputers under a unified, easy-to-use interface.
        Speaker: Ruslan Smeliansky (Lomonosov Moscow State University)
        Slides
      • 17:30
        RDIG-M – Russian Data Intensive Grid for Megascience 30m
        in progress
        Speaker: Dr Vasily Velikhov (NRC "Kurchatov Institute")
      • 18:00
        IBM POWER9. Summit - what the most powerful supercomputer is doing 20m
        in progress
        Speaker: Alexey Perevozchikov (NIAGARA COMPUTERS)
      • 18:20
        CISCO Solutions for Compute, Storage and Networking Infrastructure 20m
        Speaker: Evgeny Loguntsov (CISCO)
      • 18:40
        Convergence is a new HPC paradigm 20m
        Speaker: Andrey Semin (Intel)
    • 20:30 22:30
      Welcome Party (Drinks & Buffet) 2h Splendid Conference & SPA Resort, Conference Bar

    • 09:00 11:00
      Plenary Splendid Conference & SPA Resort, Conference Hall Baltšiċa

      Convener: Prof. Ivan Vankov (Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences)
      • 09:00
        Distributed control and monitoring tools at LU-20 and HILAC complexes 30m
        The TANGO control system has been chosen as the main platform for developing control software at the Nuclotron. The experimental setup of the TANGO system was successfully tested during the runs of the existing accelerator complex. The report describes the hardware, server and client software modules for data acquisition and equipment management at the LU-20 and HILAC linear accelerators. Universal web clients were developed for the management of equipment groups. The data is transferred in a single stream for each group of equipment. The client layer interacts with the TANGO control system via the standard HTTP and WebSocket protocols, which significantly expands the choice of programming languages for writing the client software. The TANGO device server WebSocketDS was developed for data exchange via the WebSocket protocol. Various JavaScript libraries and frameworks, such as Angular, ReactJS, ExtJS and a few others, were used for the client layer development; they make it possible to create cross-platform client web applications for the control systems. The JavaScript framework Electron was used for creating standard desktop applications. (A minimal TANGO device server sketch follows this entry.)
        Speaker: Mr Vladimir Elkin (JINR)
        Slides
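        A minimal PyTango device-server sketch in the spirit of the TANGO-based controls described above; the device class, attribute and command are hypothetical examples, not the actual LU-20/HILAC servers or the WebSocketDS server.

          from tango.server import Device, attribute, command, run

          class InflectorPSU(Device):
              """Toy power-supply device exposing one attribute and one command."""

              def init_device(self):
                  super().init_device()
                  self._voltage = 0.0

              @attribute(dtype=float, unit="V")
              def voltage(self):
                  # Read callback: return the last known voltage.
                  return self._voltage

              @command(dtype_in=float)
              def set_voltage(self, value):
                  # Write callback: a real server would talk to the hardware here.
                  self._voltage = value

          if __name__ == "__main__":
              run((InflectorPSU,))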
      • 09:30
        ATLAS Trigger and Data Acquisition Upgrades for the High Luminosity LHC 30m
        The ATLAS experiment at CERN has started the construction of upgrades for the "High Luminosity LHC", with collisions due to start in 2026. In order to deliver an order of magnitude more data than previous LHC runs, 14 TeV protons will collide with an instantaneous luminosity of up to 7.5x10^34 cm^-2s^-1, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this is essential to realise the physics programme, it presents a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. The approved baseline design of the TDAQ upgrade comprises: a hardware-based low-latency real-time Trigger operating at 40 MHz, Data Acquisition which combines custom readout with commodity hardware and networking to deal with 5.2 TB/s input, and an Event Filter running at 1 MHz which combines offline-like algorithms on a large commodity compute service augmented by hardware tracking. Commodity servers and networks are used as far as possible, with custom ATCA boards, high speed links and powerful FPGAs deployed in the low-latency parts of the system. Offline-style clustering and jet-finding in FPGAs, and track reconstruction with Associative Memory ASICs and FPGAs are designed to combat pileup in the Trigger and Event Filter respectively. This paper will report recent progress on the design, technology and construction of the system. The physics motivation and expected performance will be shown for key physics processes.
        Speaker: Mr Wainer Vandelli (CERN)
        Slides
      • 10:00
        CMS High Level Trigger performance in Run 2 30m
        The CMS experiment selects events with a two-level trigger system, the Level-1 (L1) trigger and the High Level trigger (HLT). The HLT is a farm of approximately 30K CPU cores that reduces the rate from 100 kHz to about 1 kHz. The HLT has access to the full detector readout and runs a streamlined version of the offline event reconstruction. In Run 2 the peak instantaneous luminosity reached values above $2 \times 10^{34}$ cm$^{-2}$ sec$^{-1}$, posing a challenge to the online event selection. An overview of the object reconstruction and trigger selections used in the 2016-2018 data-taking period will be presented. The performance of the main trigger paths and the lessons learned will be summarized, also in view of the coming Run 3.
        Speaker: Hale Sert (RWTH Aachen University)
        Slides
      • 10:30
        Detector performance and stability of the CMS RPC system during Run-2 30m
        The CMS (Compact Muon Solenoid) experiment at the Large Hadron Collider (LHC) at CERN exploits three different gaseous detector technologies to measure and trigger on muons: Cathode Strip Chambers (in the forward regions), Drift Tubes (in the central region), and Resistive Plate Chambers (in both the central and forward regions). The CMS RPC system provides information to all muon track finders and thus ensures the robustness and redundancy of the first level of muon triggering. Different approaches have been used to monitor the detector stability during the Run-2 data taking. A summary of the CMS RPC detector performance will be presented in terms of the main detector parameters, efficiency and cluster size, including the background measurements as well.
        Speaker: Dr Roumyana Hadjiiska (Bulgarian Academy of Sciences - INRNE)
        Slides
    • 09:00 11:00
      Round Table: Discussing Practical Aspects of Implementing CISCO Technologies: Server, Storage, Networking, Private and Hybrid Cloud Splendid Conference & SPA Resort, Conference Hall Petroviċa

    • 11:00 11:30
      Coffee break 30m Conference Bar

    • 11:30 13:30
      Plenary Splendid Conference & SPA Resort, Conference Hall Baltšiċa

      Convener: Prof. Dario Barberis (University and INFN Genova (Italy))
      • 11:30
        EI3 – The ATLAS EventIndex for LHC Run 3 30m
        Since 2015 the ATLAS EventIndex has provided a good and reliable service for the initial use cases (mainly event picking) and several additional ones, such as production consistency checks, duplicate event detection and measurements of the overlaps of trigger chains and derivation datasets. LHC Run 3 will see increased data-taking and simulation production rates, with which the current infrastructure would still cope but may be stretched to its limits by the end of Run 3. This talk describes a new implementation of the front- and back-end services that will be able to provide at least the same functionality as the current one for increased data ingestion and search rates and with increasing volumes of stored data. It is based on a set of HBase tables, with schemas derived from the current Oracle implementation, coupled to Apache Phoenix for data access; in this way the advantages of a BigData-based storage system are combined with the possibility of SQL as well as NoSQL data access, allowing most of the existing code for metadata integration to be re-used. (A hypothetical Phoenix query sketch follows this entry.)
        Speaker: Dr Fedor Prokoshin (JINR)
        Slides
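        A hypothetical sketch of SQL access to an HBase-backed table through the Apache Phoenix Query Server using the "phoenixdb" Python driver; the URL, table and column names are invented for illustration and are not the actual ATLAS EventIndex schema.

          import phoenixdb  # pip install phoenixdb

          conn = phoenixdb.connect("http://phoenix-queryserver:8765/", autocommit=True)
          try:
              cur = conn.cursor()
              # Event picking: locate the dataset/GUID holding a given run/event pair.
              cur.execute(
                  "SELECT dataset, guid FROM event_index "
                  "WHERE runnumber = ? AND eventnumber = ?",
                  (358031, 1234567),
              )
              for dataset, guid in cur.fetchall():
                  print(dataset, guid)
          finally:
              conn.close()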
      • 12:00
        Electronics upgrade for the CMS CSC muon system at the High Luminosity LHC 30m
        Cathode strip chambers (CSCs) are used to detect muons in the endcap region of the CMS detector. The High Luminosity LHC will present particular challenges to the electronics that read out the CSCs: the demands of increased particle flux, longer trigger latency, and increased trigger rate require upgraded electronics boards in the forward region. In particular, both the anode and cathode readout electronics will include full digitization at the beam crossing rate and data pipelining in deep digital FIFOs that provide nearly deadtimeless operation and the capability to accommodate long latency requirements without loss of data. High speed optical links will be used to increase the bandwidth for data between the on-chamber electronics and the back end. Motivated by experience with the present electronics and the expectations for high radiation conditions during operation of the HL-LHC, some novel features are incorporated into this second-generation electronics design to provide increased robustness. Radiation tolerant optical transceivers are used in all on-chamber applications. Due to low reliability of EPROMs after radiation doses in the range of 10 kRad, an alternate capability for programming the FPGAs is included via an asynchronous optical link that can complete the programming with comparable or better speed than the EPROM. We present the novel features of these electronics along with results from test stands and initial commissioning in CMS with cosmic rays.
        Speaker: Isabelle De Bruyn (University of Wisconsin - Madison)
        Slides
      • 12:30
        Case of Cloud Designing and Development 30m
        The design and development of a computing cloud is a complex process in which numerous factors have to be taken into account, for example the size of the planned cloud and its potential growth, hardware/software platforms, a flexible architecture, security, and ease of maintenance. A computing cloud quite often consists of several data centers (DCs). A DC is considered to be a group of hardware and/or virtual servers dedicated to running the user virtual machines (VMs) and/or storage servers. Each pair of DCs may be interconnected by one or more virtual data transfer links. To manage such a cloud as "Infrastructure as a Service" (IaaS), a distributed operating management system (DOMS) is needed. The proposed architecture for the DOMS is a set of software agents; important advantages of this approach are flexibility, horizontal scalability, and the independent development and maintenance of each agent. The specially developed protocol for sending and receiving requests between agents is also discussed (a toy illustration of such an exchange follows this entry). Due to the geographical distribution, the requirements for system stability in terms of hardware and software malfunctions are high, and the proposed DOMS architecture addresses operating stability as well. Observations of a prototype consisting of several DCs located about 100 km from each other and practical results are presented. Potential application fields where this development might be used are also discussed.
        Speaker: Mr Andrey Shevel (PNPI, ITMO)
        Slides
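        A toy illustration of the kind of JSON request/response exchange that agents of a distributed operating management system might use; the message format, endpoint and action name are invented here and are not the protocol described in the talk.

          import json
          from urllib import request

          def send_agent_request(agent_url: str, action: str, payload: dict) -> dict:
              """POST a JSON message to another agent and return its JSON reply."""
              message = json.dumps({"action": action, "payload": payload}).encode()
              req = request.Request(
                  agent_url, data=message, headers={"Content-Type": "application/json"}
              )
              with request.urlopen(req, timeout=10) as resp:
                  return json.load(resp)

          # Example (hypothetical): ask a DC agent to start a virtual machine.
          # reply = send_agent_request("http://dc1-agent:8080/api", "start_vm",
          #                            {"image": "centos7", "cores": 4, "ram_mb": 8192})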
      • 13:00
        BM@N experiment for studies of baryonic matter at the Nuclotron 30m
        The first experiment at the NICA-Nuclotron accelerator complex, BM@N (Baryonic Matter at Nuclotron), is aimed at studying interactions of relativistic heavy-ion beams with fixed targets. Relativistic heavy-ion collisions provide a unique opportunity to investigate the properties of nuclear matter at ultra-high density and temperature. The Nuclotron heavy-ion beam energy range is well suited for studies of strange mesons and multi-strange hyperons, which are produced in nucleus-nucleus collisions close to the kinematic threshold. The measurements will be carried out at the BM@N experimental setup, located at the extracted beam of the Nuclotron. The BM@N setup, the status of the detector upgrade for data taking with relativistic heavy-ion beams and the experimental program are presented.
        Speaker: Mrs Anna Maksymchuk (JINR)
        Slides
    • 11:30 13:30
      Round Table: Discussing Practical Aspects of Implementing CISCO Technologies: Server, Storage, Networking, Private and Hybrid Cloud Splendid Conference & SPA Resort, Conference Hall Petroviċa

    • 13:30 15:00
      LUNCH 1h 30m
    • 15:00 16:00
      Detector & Nuclear Electronics Splendid Conference & SPA Resort, Conference Hall Petroviċa

      Convener: Dr Roumyana Hadjiiska (Bulgarian Academy of Sciences - INRNE)
      • 15:00
        Detector performance of the CMS Precision Proton Spectrometer during LHC Run 2 and its upgrades for Run 3 15m
        The CMS Precision Proton Spectrometer (PPS) consists of silicon tracking stations as well as timing detectors to measure both the position and direction of protons and their time-of-flight with high precision. Special devices called Roman Pots are used to insert the detectors inside the LHC beam pipe to allow the detection of scattered protons close to the beam itself. They are located at around 200 m from the interaction point in the very forward region on both sides of the CMS experiment. The tracking system consists of 3D pixel silicon detectors, while the timing system is made of diamond pixel detectors and Ultra-Fast Silicon Detectors. PPS has taken data at high luminosity while fully integrated into the CMS experiment. The total data collected correspond to around 100 fb^-1 during LHC Run 2. In this presentation, the PPS detector operation, commissioning and performance are discussed, as well as the upgrades foreseen for Run 3.
        Speaker: Fabrizio Ferro (INFN Genova)
        Slides
      • 15:15
        Results of the Radiation Dose Study around the GEM Muon Detectors at CMS 15m
        GEM (Gas Electron Multiplier) detectors are being developed to measure the muon flux at the future HL (High Luminosity) LHC. A radiation monitoring system to control the dose absorbed by these detectors during the tests was designed. The system uses a basic detector unit, called RADMON. Each unit contains two types of sensors: RadFETs, measuring the total absorbed dose from all radiation, and p-i-n diodes for particle (proton and neutron) radiation. The system has a modular structure, permitting the number of controlled RADMONs to be increased easily; one module controls up to 12 RADMONs. For the first test, a group of 3 GEM chambers called a supermodule was installed at the inner CMS endcap in March 2017. One RADMON was placed inside this supermodule for dose monitoring and, through a local system controller, it transfers the measured data to the test experiment data acquisition system. The real dose data registered over this long period are now being processed, and the most important results will be presented at NEC 2019.
        Speaker: Prof. Ivan Vankov (Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences)
        Slides
      • 15:30
        GEM detectors for the Upgrade of the CMS Muon Forward system 15m
        The CMS experiment is one of the two general-purpose experiments at the LHC pp collider. For LHC Phase-2, the instantaneous luminosity delivered to the experiment will reach 5 × 10^34 cm^-2 s^-1, resulting in high particle fluxes that require the detectors to be upgraded. The forward regions, corresponding to the endcaps of the detectors, are the most affected parts. In the CMS experiment, to cope with the higher event rates and larger radiation doses, triple-layer Gas Electron Multipliers (GEM) will be installed in the muon endcaps. Triple-GEM chambers will complement the existing Cathode Strip Chambers, leading to better identification of the muon tracks and a reduction of the trigger rate due to the suppression of fake candidates. In addition, the forward coverage will be further extended. For the first ring of the muon endcaps, 144 GEM chambers are being built in production sites spread across 7 countries around the world. For the first time, such detectors will have large sizes of the order of 1 m2, so high requirements on the uniformity across the detector must be met. Before the final installation in the CMS detector, the GEM chambers undergo multiple quality control tests of their integrity, quality and performance. This talk gives an introduction to GEM detectors and presents results of the performance tests.
        Speaker: Dr Martina Ressegotti (University & INFN Pavia, Italy)
        Slides
      • 15:45
        Control and Monitoring of the Booster Inflector Plates Power System 15m
        The superconducting synchrotron Booster is a part of the NICA collider injection complex. The Booster injection system will consist of 3 pairs of inflector plates to provide different schemes of heavy-ion injection: single, multiple and multiturn. The report presents the main principles, parameters, realization and test results of the inflector plates power supply system for the Booster beam injection. The core of the system is realized on the National Instruments CompactRIO platform, consisting of a cRIO-9068 chassis with analog and digital input and output modules and a CAN interface module. The control and monitoring software is based on the TANGO control system and consists of FPGA firmware, a few TANGO device servers running on the cRIO controller and graphical operator interface applications.
        Speaker: Hristo Nazlev (Petkov)
    • 15:00 16:00
      Distributed Computing. GRID & Cloud computing Splendid Conference & SPA Resort, Conference Hall Baltšiċa

      Convener: Dr Alexander Kryukov (SINP MSU)
      • 15:00
        Filling the Storage Performance Gap: Storage on-demand for Data-Intensive Workloads in Composable/Disaggregated Paradigm 20m
        Speaker: Ivan Sapozhkov (RSC)
      • 15:20
        CISCO Solutions for Private and Hybrid Clouds 20m
        Speaker: CISCO
      • 15:40
        Grid at JINR 15m
        The JINR grid infrastructure is represented by the Tier1 center for the CMS experiment at the LHC and the Tier2 center. The JINR grid center resources are part of the global grid infrastructure WLCG (Worldwide LHC Computing Grid), developed for the LHC experiments, and JINR LIT actively participates in the WLCG global project. The work on the use of the grid infrastructure within the WLCG project is carried out in cooperation with collaborations such as CMS, ATLAS and ALICE and with the major international centers operating as Tier1 centers of the CMS experiment (CH-CERN, DE-KIT, ES-PIC, FR-CCIN2P3, IT-INFN-CNAF, US-FNAL-CMS) and as Tier2 grid centers located in more than 170 computing centers of 42 countries worldwide. Since the beginning of 2015, a full-scale WLCG Tier1 site for the CMS experiment at the LHC has been operating in JINR LIT. The CMS Tier1 center at JINR has demonstrated stable work through the entire period since its launch into full operation and takes second place in performance among the world's Tier1 sites for CMS. The Tier2 center supports a whole number of virtual organizations, in particular ALICE, ATLAS, CMS, LHCb, BES, BIOMED, COMPASS, MPD, NOvA, STAR and others.
        Speaker: Dr Tatiana Strizh (JINR)
        Slides
    • 16:00 16:30
      Coffee break 30m Conference Bar

    • 16:30 19:00
      Detector & Nuclear Electronics Splendid Conference & SPA Resort, Conference Hall Petroviċa

      Convener: Dr Sergey Sidorchuk (FLNR JINR)
      • 16:30
        Simulation of spectra of cylindrical neutron counters using the GEANT-4 package 15m
        It is commonly supposed that the amplitude spectrum of a helium proportional counter irradiated by thermal and cold neutrons has a full-absorption peak at an energy of 768 keV and two small "shelves" caused by boundary effects when the charged particles (the proton or the tritium nucleus) fall into the detector wall. Simulation of the amplitude spectra of cylindrical counters with different gas fillings is presented in the paper. The possibility of a third peak, not coinciding with the full-absorption peak, is shown; its position depends on the ratio of the path length to the counter diameter. The results obtained may be of interest for the development of low-efficiency neutron detectors and neutron monitors.
        Speaker: Mr Andrey Churakov (FLNP JINR)
      • 16:45
        Measurement of basic static characteristics (I-V, C-V) of silicon detectors 15m
        The use of microstrip detectors in creating coordinate tracking systems for HEP experiments, with high geometric efficiency (~100%), a large number of strips (measuring channels) above 10^6 and an accuracy of a/√12 (a – pitch), requires careful preliminary selection of detectors by their main parameters. The main static parameters of silicon microstrip detectors include the following: the I-V characteristic determines the dark leakage current of a silicon detector, while the C-V characteristic allows one to determine the full depletion voltage and the capacitance of both the strip and the detector. Modern systems for testing and selection of microstrip detectors make it possible, in automated mode, to identify strips with high dark currents, possible short circuits and breaks in the interstrip metallization.
        Speaker: Ms Catherine Streletskaya (JINR)
        Slides
      • 17:00
        Application of modern commercial digitizers for new approaches to neutron detection 15m
        Modern commercially available digitizers provide, for a moderate price, new detection approaches (pulse shape discrimination (PSD), pulse height analysis, etc.) in nuclear and particle physics. In particular, such new electronics have become highly demanded for neutron detection. One of the new detection methods is to use the PSD technique with new lithium-containing scintillators for effective discrimination between neutron and gamma events. As we found, the high level of intrinsic alpha background of these scintillators still does not allow such detectors to be used in low-background experiments. The present work describes a fundamentally new neutron detection method, which is a combination of modern digitizers with well-known NaI detectors. The method is based on delayed coincidences in the de-excitation of iodine-128, which results from neutron capture on iodine-127. The sensitivity of the method has been investigated with several different digitizers.
        Speaker: Mr Dmitrii Ponomarev (DLNP)
        Slides
      • 17:15
        Front-End Electronics for BM@N STS, characterization and quality assurance 15m
        The data acquisition system (DAQ) for the Silicon Tracking System (STS) of the BM@N experiment (Dubna, Russia) is described. The system will be based on double-sided microstrip silicon sensors of the CBM type and will be commissioned in 2022. The DAQ system of BM@N STS will operate in a data-driven mode with a high throughput bandwidth (up to 300 Gb/s) in a radiation-hard environment and will transmit data from more than 600 000 channels. Results of the characterization of the front-end electronics are presented. The key component of the front-end board (FEB) is the STS/MUCH-XYTER ASIC. Test results of the analog and digital parts of the ASIC are presented, as well as results of the in-beam tests of the front-end electronics. Assembly of the first STS modules has already started at the Joint Institute for Nuclear Research (JINR). STS modules consist of a double-sided microstrip sensor, a set of aluminum signal micro-cables and FEBs. A quality assurance system for bonding quality control during the assembly was developed.
        Speaker: Mr Mikhail Shitenkov (JINR)
        Slides
      • 17:30
        Experiments with GABRIELA detector system 15m
        Over several years, more than a dozen experiments have been carried out at SHELS (Separator for Heavy ELements Spectroscopy), aimed at investigating the characteristics of heavy elements and discovering new isotopes. The GABRIELA data acquisition system consists of a 10x10 cm2 DSSSD with 128x128 strips and 8 side plates of 6x5 cm2 DSSDs with 32x32 strips. It detects 70% of alpha particles and 90% of gamma quanta from spontaneous fission, and also separates events accurately in time (1 μs). A complex of 5 coaxial Ge detectors provides good efficiency for gamma-quanta registration. Combining α-decay with γ- and β-decay spectroscopy makes it possible to investigate the behavior of single-particle states, as well as the structure of little-known elements in the Z = 100-104 and N = 152-162 region.
        Speaker: Mrs Alena Kuznetsova (JINR)
        Slides
      • 17:45
        Calculation of efficiency of cylindrical thermal neutron counter assemblies 15m
        Cylindrical proportional counter assemblies are the main tool for observing neutron fluxes on many spectrometers. Optimization of the geometric parameters of the assemblies is of interest from the point of view of increasing the homogeneity of the efficiency and simplifying the design of the detector system. The efficiency of different variants of assembly designs consisting of 4 or 5 Helium-4-1 type counters has been calculated in this paper. The GEANT-4 package has been used to simulate the operation of the modules designed to replace the old counters of the NERA spectrometer. The calculation results have been compared with the experimental ones.
        Speaker: Dr Aleksey Kurilkin (JINR)
        Slides
      • 18:00
        New read-out electronics for the Drift-Tube Chambers of CMS 15m
        The Drift Tubes (DT) system is the key detector in the barrel region of CMS dedicated to the measurement of muon tracks. The signals from about 172000 DT cells must be acquired quickly and synchronously to deliver the information about the hits. In the context of the LHC luminosity increase in preparation for Phase-II, the DT system is being upgraded. The main focus of this upgrade is the development of a new generation of read-out electronics based on FPGA technology. The new on-chamber electronics will provide higher acquisition rates, radiation resistance and flexibility of the trigger settings for the DT system. The DT chambers will be equipped, depending on the chamber type, with 3 to 5 boards of a single type called OBDTs (On Board electronics for Drift Tubes). Along with better read-out characteristics, the OBDTs ensure fewer intermediate elements in the read-out chain. The talk presents an overview of the OBDT architecture. Special attention will be given to the explanation of the FPGA firmware structure and functionality.
        Speaker: Dr Dmitry Eliseev (RWTH Aachen University)
        Slides
      • 18:15
        Improvements in the NOvA Detector Simulation based on JINR stand measurements 15m
        NOvA is a long-baseline neutrino experiment aiming to study the neutrino oscillation phenomenon in the muon neutrino beam from the NuMI complex at Fermilab (USA). Two identical detectors have been built to measure the initial neutrino flux spectra at the near site and the oscillated spectra at a distance of 810 km, which significantly reduces many systematic uncertainties. To improve the separation of electron neutrino and neutral current interactions, the detector is constructed as a finely segmented structure filled with liquid scintillator. Charged particles lose their energy in the detector materials, producing light signals in a cell which are recorded by the readout electronics. The simulation models this using the following chain: a parameterized front-end simulation converts all energy deposits in active material into scintillation light, the scintillation light is transported through an optical fiber to an avalanche photodiode, and the readout electronics simulation models the shaping, digitization, and triggering on the response of the photodiode. Two test stands have been built at JINR (Dubna, Russia) to measure the proton light response of the NOvA scintillator and the electronic signal shaping of the NOvA front-end board. The parameters measured using these test stands have been implemented in the custom NOvA simulation chain.
        Speaker: Oleg Samoylov (JINR)
        Slides
      • 18:30
        Design of the front-end electronics based on multichannel IDEAS ASICs for silicon and GEM detectors 15m
        IDEAS ASICs are designed for the front-end readout of ionizing radiation detectors and are produced by the commercial fabless IC supplier Integrated Detector Electronics AS (Norway). IDEAS ASICs are multichannel (32/64/128) chips; each chip channel has a pre-amplifier, a shaper and multiplexed analogue readout. It is necessary to configure the internal chip registers, control the analogue readout and transmit data from each measuring channel to the DAQ system; these are the basic functions of the FPGA-based Control Unit. The front-end electronics for silicon and GEM detectors consist of IDEAS ICs, an ADC and a Control Unit. The current BM@N FEE configuration (March 2018) is based on IDEAS ASICs for the Forward Silicon Detector, GEM detectors and CSC. According to the upgrade plans for the BM@N FEE, the Si beam tracker, Si beam profiler and Forward Silicon Tracking Detectors will also be based on the same ASICs. This paper presents the design of the front-end electronics of the BM@N Si beam profiler: Double-Sided Silicon Detectors – a coordinate plane with 2x128 measuring channels; IDEAS ASICs – the front-end readout of the DSSD; an Analog Devices ADC; and an FPGA Xilinx Control Unit.
        Speaker: Ms Yulia Ivanova (VBLHEP JINR)
        Slides
      • 18:45
        Intel FPGA products for data processing in complex computing environment 15m
        Speaker: Mr Konstantin Dobrosolets (Intel FPGAs)
    • 16:30 18:45
      Distributed Computing. GRID & Cloud computing Splendid Conference & SPA Resort, Conference Hall Baltšiċa

      Convener: Dr Tatiana Strizh (JINR)
      • 16:30
        Present status and main directions of the JINR cloud development 15m
        The JINR cloud grows not only in terms of the amount of resources, but also in the number of activities it is used for, namely COMPASS production system services, a data management system of the UNECE ICP Vegetation, a service for disease detection of agricultural crops through the use of advanced machine learning approaches, a service for scientific and engineering computations, a service for data visualization based on Grafana, the JupyterHub head node and its execute nodes, GitLab and its runners, as well as some others. Apart from that, there was a successful attempt to deploy in the JINR cloud a virtual machine with a GPU card passed through from the server for developing and running machine and deep learning algorithms for the JUNO experiment. Moreover, the JINR distributed information and computing environment, joining resources from the JINR Member State organizations with the help of the DIRAC grid interware, began to be used for running BM@N and MPD experiment jobs. The software distribution on these remote resources was done with the help of the CernVM File System. All these topics are covered in detail.
        Speaker: Dr Nikolay Kutovskiy (JINR)
        Slides
      • 16:45
        Improving Resource Usage in HPC Clouds 15m
        HPC-as-a-service is a new cloud paradigm that provides easy-to-access and easy-to-use cloud environments for High Performance Computing (HPC). This paradigm has been receiving a lot of attention from the research community lately, since it represents a good tradeoff between computational power and usability. One of the key drawbacks associated with HPC clouds is low CPU usage due to the network communication overhead [2, 3]. Instances of HPC applications may reside on different physical machines separated by significant network latencies. Network communications between such instances may consume significant time and thus result in CPU stalls. In this paper we propose a scheduling algorithm that overcomes such drawbacks to increase the HPC task capacity in an Ethernet-based HPC cloud by sharing CPU cores between different VMs. The algorithm observes the behavior of parallel tasks and packs tasks with low CPU usage onto the same CPU cores (a simplified sketch of this packing idea follows this entry). We fully implemented and evaluated our algorithm on 15 popular MPI benchmarks/libraries. The experiments have shown that we can significantly improve the CPU usage with negligible performance degradation.
        Speaker: Andrey Chupakhin (Lomonosov Moscow State University)
        Slides
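        A simplified sketch of the packing idea described in the abstract: VMs whose observed CPU usage is low are co-located on shared cores, while busy VMs keep dedicated cores. The threshold, data structures and greedy first-fit strategy are illustrative only, not the authors' algorithm.

          from dataclasses import dataclass, field
          from typing import Dict, List

          LOW_CPU_THRESHOLD = 0.25   # VMs below 25% average CPU usage may share a core

          @dataclass
          class Core:
              capacity: float = 1.0
              dedicated: bool = False
              vms: List[str] = field(default_factory=list)

          def schedule(vm_usage: Dict[str, float], cores: List[Core]) -> None:
              """Greedy first-fit: pack low-usage VMs together, give busy VMs own cores."""
              for vm, usage in sorted(vm_usage.items(), key=lambda kv: -kv[1]):
                  if usage >= LOW_CPU_THRESHOLD:
                      target = next(c for c in cores if not c.vms)   # dedicated core
                      target.dedicated = True
                  else:
                      target = next(c for c in cores
                                    if not c.dedicated and c.capacity >= usage)
                  target.vms.append(vm)
                  target.capacity -= usage

          cores = [Core() for _ in range(4)]
          schedule({"vm1": 0.9, "vm2": 0.10, "vm3": 0.15, "vm4": 0.05}, cores)
          print([(c.vms, round(c.capacity, 2)) for c in cores])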
      • 17:00
        Cloud integration within DIRAC Interware 15m
        Computing clouds are widely used by many organizations in science, business, and industry. They provide flexible access to various physical computing resources and allow better resource utilization. Today, many scientific organizations have their own private cloud used both for hosting services and for performing computations. In many cases, private clouds are not 100% loaded and could be used as worker nodes for distributed computations. If the clouds were connected together, it would be possible to use them jointly for computational tasks; hence the idea of integrating several private clouds appeared. A cloud bursting approach may be used for the integration of resources, but in order to provide access to the united cloud for all participants, extensive configuration would be required on all clouds. We studied the possibility of uniting clouds by integrating them using a distributed workload management system, the DIRAC Interware. Two approaches for spawning virtual machines were evaluated: the OCCI interface and the OpenNebula XML-RPC interface (a minimal sketch of the latter follows this entry). Both approaches were tested and allowed computing jobs to be completed on various clouds based on the OpenNebula interware.
        Speaker: Mr Igor Pelevanyuk (JINR)
        Slides
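        A minimal sketch of spawning a worker VM through the OpenNebula XML-RPC interface, one of the two approaches mentioned above; the endpoint, credentials and VM template are placeholders, not the configuration used in the reported work.

          import xmlrpc.client

          ENDPOINT = "http://opennebula.example.org:2633/RPC2"   # hypothetical frontend
          SESSION = "oneadmin:secret"                            # "user:password" string

          TEMPLATE = """
          NAME   = "dirac-worker"
          CPU    = 2
          MEMORY = 4096
          DISK   = [ IMAGE = "worker-image" ]
          """

          server = xmlrpc.client.ServerProxy(ENDPOINT)
          # one.vm.allocate(session, template, hold) returns a list whose first two
          # elements are a success flag and the new VM id (or an error string).
          response = server.one.vm.allocate(SESSION, TEMPLATE, False)
          ok, result = response[0], response[1]
          print("Spawned VM id:" if ok else "OpenNebula error:", result)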
      • 17:15
        Accelerating personal computations with HTCondor: generating large numbers of events with GENIE 15m
        GENIE is one of the most popular MC neutrino event generators, widely used in essentially all accelerator neutrino experiments (e.g. NOvA, MINERvA). The tasks related to the development and optimization of the generator itself require creating a large number of events in the shortest possible time, to reduce the overall development time. The usage of large-scale distributed computing infrastructures, such as the Grid, does not guarantee minimal execution time due to possibly long queue times. At the same time, the power of a modern PC is not capable of making such computations in a reasonable amount of time. In this work we give an example of a hybrid approach: accelerating computations by using a personal computing device in conjunction with a general-purpose batch system based on HTCondor (an illustrative submission sketch follows this entry).
        Speaker: Mr Nikita Balashov (JINR)
        Slides
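        A minimal sketch of the "personal batch system" idea using the HTCondor Python bindings: many short, independent generation jobs are queued on a local pool. The wrapper script name, arguments and file names are placeholders, not the authors' actual GENIE setup.

          import htcondor  # HTCondor Python bindings

          sub = htcondor.Submit({
              "executable": "run_genie.sh",    # hypothetical wrapper around the generator
              "arguments": "--seed $(ProcId) --nevents 10000",
              "output": "genie_$(ProcId).out",
              "error": "genie_$(ProcId).err",
              "log": "genie.log",
              "request_cpus": "1",
          })

          schedd = htcondor.Schedd()
          result = schedd.submit(sub, count=100)   # 100 independent generation jobs
          print("Submitted cluster", result.cluster())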
      • 17:30
        Optimizing resource usage with HTCondor 15m
        HTCondor is a very flexible job management system, but for site administrators it is not always easy to come up with an optimal configuration that fulfills local policies and requirements. Everybody would expect normal job execution to follow the fairshare configuration and recent resource usage, but with a few additional, quite natural requirements, such as a minimum of idle resources, it can quickly become difficult to achieve all these very simple goals together. The HTCondor batch system by design maximizes the utilization of all resources, and this approach works fine for jobs with the same resource requirements (CPU, memory, ...). With a mixture of smaller and bigger jobs, the available resources can be sufficient to start only the small jobs, while a big job will wait idle almost indefinitely. HTCondor can run the DEFRAG daemon to consolidate fragmented resources, but this often leads to an unnecessarily high number of idle resources. Several different approaches exist to optimize the draining and consolidation of fragmented resources, but most of them focus just on grid multicore jobs. We think this is an unnecessary restriction, especially for our local users, and that led to the implementation of our own mechanism for machine draining that supports arbitrary multi-dimensional resource requirements while trying to minimize the draining time and optimize resource utilization. Details about our draining mechanism will be presented and compared with other solutions.
        Speaker: Mr Petr Vokac (Institute of Physics of the Czech Academy of Sciences)
        Slides
      • 17:45
        Data processing and analysis for Baikal-GVD 15m
        Baikal-GVD is a deep-underwater gigaton-volume neutrino telescope currently under construction in Lake Baikal. The detector is a spatially distributed lattice of photomultipliers, designed to register Cherenkov radiation from the products of neutrino interactions with the water of the lake. When the trigger conditions are met, digitized photomultiplier waveforms are sent to the shore, allowing for the reconstruction of the energy and direction of the neutrino. We describe the data processing and analysis infrastructure that has been developed for the detector.
        Speaker: Mr Alexander Avrorin (INR RAS)
      • 18:00
        Distributed data processing of COMPASS experiment 15m
        Implementation of COMPASS data processing in the distributed environment started in 2015. Since the summer of 2017, the data processing system has worked in production mode, distributing jobs to two traditional grid sites: CERN and JINR. There are two storage elements, both at CERN: the disk storage EOS for short-term storage and the tape storage Castor for long-term storage. The processing management services, including the MySQL server, PanDA servers, APF/Harvester server, monitoring server, and production management server, are deployed in the JINR Cloud Service. Thus, the system which manages the distributed data processing of the experiment is itself also distributed. The production management system is based on the principles of service-oriented architecture: each service of the system is maximally isolated from the others, is executed independently, and usually performs only one function, for example sending jobs, checking their statuses, archiving results, and so on. During the last year, the system was extended with a task archiving mechanism, FTS and Harvester services, and a Monte Carlo processing chain. The status, statistics, workflow, data management, and infrastructure overview are presented in this report.
        Speaker: Mr Artem Petrosyan (JINR)
        Slides
      • 18:15
        Multifunctional platform and mobile application for plant disease detection 15m
        Crop losses are a major threat to the wellbeing of rural families, to the economy and governments, and to food security worldwide. We present a multifunctional platform for plant disease detection (PDDP). PDDP consists of a set of interconnected services and tools developed, deployed, and hosted with the help of the JINR cloud infrastructure. PDDP was designed using modern organization and deep learning technologies to provide a new level of service to the farmers' community. A mobile application allowing users to send photos and text descriptions of sick plants and to get the cause of the illness and its treatment is part of PDDP. We collected a special database of grape, wheat and corn leaves consisting of fifteen sets of images, tried different neural network architectures on these data and selected the best one. The architecture and basic principles of the platform and networks are described and compared with other well-known solutions (a minimal siamese-network sketch follows this entry). We will show the web portal and the mobile app and the ways different types of users can work with them. Keywords: siamese networks, convolutional neural networks, deep learning, plant disease detection. The reported study was funded by RFBR according to the research project № 18-07-00829.
        Speaker: Dr Alexander Uzhinskiy (JINR)
        Slides
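        A hedged sketch of a siamese network of the kind mentioned in the abstract: two leaf images pass through a shared convolutional embedding, and the distance between the embeddings drives a same/different decision. The architecture and input size are illustrative, not the PDDP production model.

          import tensorflow as tf
          from tensorflow.keras import layers, Model

          def embedding_net(shape=(128, 128, 3)):
              inp = layers.Input(shape=shape)
              x = layers.Conv2D(32, 3, activation="relu")(inp)
              x = layers.MaxPooling2D()(x)
              x = layers.Conv2D(64, 3, activation="relu")(x)
              x = layers.GlobalAveragePooling2D()(x)
              return Model(inp, layers.Dense(64)(x))

          base = embedding_net()
          img_a = layers.Input(shape=(128, 128, 3))
          img_b = layers.Input(shape=(128, 128, 3))

          # L1 distance between the two embeddings, mapped to a similarity score.
          dist = layers.Lambda(
              lambda t: tf.reduce_sum(tf.abs(t[0] - t[1]), axis=1, keepdims=True)
          )([base(img_a), base(img_b)])
          out = layers.Dense(1, activation="sigmoid")(dist)

          siamese = Model(inputs=[img_a, img_b], outputs=out)
          siamese.compile(optimizer="adam", loss="binary_crossentropy",
                          metrics=["accuracy"])
          siamese.summary()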
      • 18:30
        Management of the environmental monitoring data: UNECE ICP Vegetation 15m
        Air pollution has a significant negative impact on various components of ecosystems and on human health, and ultimately causes significant economic damage. Air pollution is the fourth-largest threat to human health, behind high blood pressure, dietary risks and smoking. The aim of the UNECE International Cooperative Program (ICP) Vegetation, in the framework of the United Nations Convention on Long-Range Transboundary Air Pollution (CLRTAP), is to identify the main polluted areas of Europe, produce regional maps and further develop the understanding of long-range transboundary pollution. The program is implemented in 39 countries of Europe and Asia, and mosses are collected at thousands of sites. We will describe our approach to managing the ICP Vegetation environmental data and present the Data Management System (DMS) of the UNECE ICP Vegetation, consisting of a set of interconnected services and tools deployed and hosted in the Joint Institute for Nuclear Research (JINR) cloud infrastructure. The DMS provides its participants with a modern unified system for collecting, analyzing and processing biological monitoring data and should facilitate the IT aspects of all biological monitoring stages, starting from the choice of sampling sites and finishing with the generation of pollution maps of a particular area or a long-term state-of-environment forecast. We will present the architecture of the DMS and show how to work with it. Keywords: environmental monitoring, data management, cloud platform, intellectual data processing, UNECE ICP Vegetation, air pollution, mosses, heavy metals, neural networks.
        Speaker: Dr Alexander Uzhinskiy (JINR)
        Slides
    • 09:00 11:00
      Plenary Splendid Conference & SPA Resort, Conference Hall Baltšiċa
      Convener: Milos Lokajicek (Institute of Physics AS CR)
      • 09:00
        Third-Party-Copy transfer alternatives to GridFTP 30m
        The "Third Party Copy" (TPC) is crucial mechanism necessary to build distributed storage systems with efficient data transfers. TPC allows client to initiate direct transfer from one storage endpoint to the other party and majority of these transfers are currently done with GridFTP protocol. Uncertain future of Globus Toolkit which provides commonly used GridFTP implementation and new approaches for authorization mechanisms lead to the demand to look for viable alternatives. This effort to bring new alternative protocols supporting TPC is organized as a part of WLCG "Data Organization, Management and Access" (DOMA) group. Storage software commonly used within grid deployments support mainly WebDAV and XRootD protocols and they were both included to be evaluated for TPC support. This include documentation of the on-wire protocol, development on storage software side, functional and interoperability tests for different implementations and last but not least salability tests with production storage endpoints provided mostly by sites / VOs involved in distributed LHC data storage infrastructure. We will provide overview of current status for non-GridFTP TPC support in major storage implementation as well as status of their adoption and future plans. Currently it is also necessary to do development on the side of storage data management systems to provide proper support for multiple TPC protocols within one distributed grid storage systems. All this effort can make GridFTP protocol optional in the future once majority of storage endpoints support alternative protocol and especially with WebDAV this can make much easier integration of the industry standard storage solutions.
        Speaker: Mr Petr Vokac (Institute of Physics of the Czech Academy of Sciences)
        Slides
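        A schematic Python sketch of the WebDAV-based third-party copy described above (push mode, where the client asks the source to copy the file to the destination). This is an illustration, not code from the talk: the endpoint URLs and token placeholder are hypothetical, and the conventions for passing a second credential to the remote endpoint are defined by the WLCG HTTP-TPC documentation rather than shown here.
        import requests

        SOURCE = "https://eos.example.org/store/file.root"          # hypothetical source endpoint
        DESTINATION = "https://dcache.example.org/store/file.root"  # hypothetical destination endpoint

        resp = requests.request(
            "COPY",                                # WebDAV COPY method (RFC 4918)
            SOURCE,
            headers={
                "Destination": DESTINATION,        # ask the source to push the file here
                "Authorization": "Bearer <token>", # credential for the source endpoint
                # A second credential for the destination is normally passed via a
                # dedicated header defined by the HTTP-TPC convention (assumption).
            },
            verify="/etc/grid-security/certificates",  # CA bundle location is an assumption
        )
        print(resp.status_code)  # storages typically stream transfer progress in the response body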
      • 09:30
        PIK Data Centre status update 30m
        In the framework of the PIK nuclear reactor reconstruction project, the PIK Data Centre was commissioned in 2017. While the main purpose of the Centre is the storage and processing of PIK experiment data, its capacity is also used by other scientific groups at PNPI and outside for solving problems in different areas of science such as computational biology and condensed matter physics. The PIK Data Centre is an integral part of the computing facilities of NRC "Kurchatov Institute". The PIK Computing Centre has a heterogeneous structure and consists of several types of computing nodes suitable for a wide range of tasks and two independent data storage systems, all of which are interconnected with a fast InfiniBand network. The engineering infrastructure provides redundant main power and two independent UPS installations for the computing equipment and for the cooling system. In this talk we will highlight the results and challenges of one and a half years of successful operation.
        Speaker: Mr Andrey Kiryanov (PNPI)
        Slides
      • 10:00
        Multifunctional Information and Computing Complex of JINR: status and perspectives 30m
        The implementation of the MICC (Multifunctional Information and Computing Complex) project in 2017-2019 laid the foundation for its further development and evolution, taking into account new requirements for the computing infrastructure of JINR scientific research. The rapid development of information technologies and new user requirements stimulate the development of all MICC components and platforms. Multi-functionality, high reliability and availability in 24x7 mode, scalability and high performance, a reliable data storage system, information security and a customized software environment for different user groups are the main requirements which the MICC should meet as a modern scientific computing complex. The JINR MICC, consisting of four key components - the grid infrastructure, the central computing complex, the computing cloud and the HybriLIT high-performance platform which includes the ‘Govorun’ supercomputer - ensures the implementation of a whole range of competitive research conducted at the world level at JINR in experiments such as MPD, BM@N, ALICE, ATLAS, CMS, NOvA, BESIII, STAR and COMPASS. The MICC includes the Tier1 grid center, which is the only one in the JINR Member States and one of the seven world data storage and processing centers of the CMS experiment (CERN). The JINR Tier1 and Tier2 grid sites are elements of the global grid infrastructure used in the WLCG project for processing data from the LHC experiments and other grid applications.
        Speaker: Dr Tatiana Strizh (JINR)
        Slides
      • 10:30
        New fast simulation in ATLAS 30m
        The ATLAS physics programme relies on very large samples of simulated events. Most of these samples are produced with GEANT4, which provides a detailed simulation of the ATLAS detector. However, this simulation is time and CPU consuming, and the available resources will not allow MC production to keep up with the luminosity increase foreseen for the LHC. To solve this problem, fast simulation tools are needed to replace the Geant4-based simulation. Unfortunately, the current fast simulation tools used in ATLAS are not accurate enough to be used by all analyses. Hence, the ATLAS collaboration is developing new fast calorimeter simulation tools (FastCaloSim) which use machine learning techniques, such as principal component analysis and deep neural networks. Prototypes for both approaches are being tested and validated; the new FastCaloSim shows a significant improvement in the description of cluster-level variables in electromagnetic and hadronic showers over existing tools, while the deep learning approaches are a promising R&D. To complement the new FastCaloSim, ATLAS is developing the Fast Chain. This provides fast tools for the simulation of the rest of the ATLAS detector and for the digitisation and reconstruction of the events. By combining these tools, ATLAS aims to have the capability to simulate the required numbers of events to achieve its physics goals. In this talk, we will describe the new FastCaloSim tool and the new deep learning prototypes, as well as the status of the ATLAS Fast Chain.
        Speaker: Dr Michele Faucci Giannelli (University of Edinburgh)
        Slides
    • 09:00 11:00
      Round Table: NIAGARA Splendid Conference & SPA Resort, Conference Hall Crnojeviċa
      • 09:00
        Data Processing and Storage for Black Hole Event Horizon Imaging 2h
        Speaker: Martin Galle (Niagara)
    • 11:00 11:30
      Coffee break 30m Conference Bar
    • 11:30 13:30
      Plenary Splendid Conference & SPA Resort, Conference Hall Baltšiċa
      Convener: Prof. Alexander Degtyarev (Professor)
      • 11:30
        JOIN²: a publication database and repository based on Invenio 30m
        JOIN² is a shared repository infrastructure that brings together eight research institutes for the development of a full-fledged scholarly publication database and repository based on the Invenio v1.3 open source framework for large-scale digital repositories. Six JOIN² instances are already successfully deployed and two more institutes have joined seamlessly during the last year, resulting in the overall consolidation of the system and its functionalities. JOIN² provides a general solution built around a well-defined publication workflow which represents the cornerstone of the JOIN² paradigm. Always preferring simplicity to complexity and implementing a convergent, inclusive solution, the JOIN² members have consolidated their successful development workflow and collaboration. This presentation highlights how JOIN² is able to address the needs of a heterogeneous group of research centers. Building on a 100% open source framework and around the users' needs, JOIN² has developed a publication database and repository (and library management system) able to address the needs of an expanding set of diverse research centers providing rich functionalities in the simplest way. The definition and enforcement of a uniform publication workflow is at the core of the JOIN² approach. We believe the JOIN² collaboration model to have proven very successful.
        Speaker: Mr Alexander Wagner (Deutsches Elektronen-Synchrotron, DESY)
        Slides
      • 12:00
        Benchmarking WLCG resources using HEP experiment workflows 30m
        The benchmarking and accounting of compute resources in WLCG needs to be revised in view of the adoption by the LHC experiments of heterogeneous computing resources based on x86 CPUs, GPUs and FPGAs. After evaluating several alternatives for the replacement of HS06, the HEPiX benchmarking WG has chosen to focus on the development of a HEP-specific suite based on actual software workloads of the LHC experiments, rather than on a standard industrial benchmark like the new SPEC CPU 2017 suite. This presentation will describe the motivation and implementation of this new benchmark suite, which is based on container technologies to ensure portability and reproducibility. This approach is designed to provide a better correlation between the new benchmark and the actual production workloads of the experiments. It also offers the possibility to separately explore and describe the independent architectural features of different computing resource types, which is expected to be increasingly important with the growing heterogeneity of the HEP computing landscape. In particular, an overview of the initial developments to address the benchmarking of non-traditional computing resources such as HPCs and GPUs will also be provided. On behalf of the HEPiX CPU Benchmarking Working Group [1,2] [1] https://w3.hepix.org/benchmarking.html [2] https://twiki.cern.ch/twiki/bin/view/HEPIX/CpuBenchmark
        Speaker: Dr Andrea Valassi (CERN)
        Slides
      • 12:30
        Towards Russian National Data Lake Prototype 20m
        The evolution of the computing facilities and of the way storage will be organized and consolidated will play a key role in how a possible shortage of resources will be addressed by the LHC experiments. The need for effective distributed data storage has been identified as fundamental from the beginning of the LHC, and this topic has become particularly vital in the light of the preparation for the HL-LHC run. WLCG has started an R&D effort within the DOMA project, and in this contribution we will report recent results related to the configuration and testing of Russian federated data storage systems. We will describe different system configurations and various approaches to testing data storage federations. We consider the EOS and dCache storage systems as backbone software for data federation and xCache for data caching. We will also report on synthetic tests and on experiment-specific tests developed by ATLAS and ALICE for the federated storage prototype in Russia. The Data Lake project was launched in the Russian Federation in 2019 to set up a national Data Lake prototype for HENP and to consolidate geographically distributed data storage systems connected by a fast network with low latency; we will report the project objectives and status.
        Speaker: Mr Andrey Kiryanov (PNPI)
        Slides
      • 12:50
        Dell EMC Ready Solutions for HPC 20m
      • 13:10
        The case for energy-efficient High Performance Computing 20m
        in progress
        Speaker: Alexander MOSKOVSKY (RSC)
    • 11:30 13:30
      Round Table: NIAGARA Splendid Conference & SPA Resort, Conference Hall Crnojeviċa
      • 11:30
        Why and how processors change their microarchitecture. New supercomputers are different: why and how? Summit: first results 2h
        Speaker: Alexey PEREVOZCHIKOV (Niagara)
    • 13:30 15:00
      LUNCH 1h 30m
    • 15:00 20:00
      Excursion: Splendid - Kotor – Perast – Gospa - Perast - Splendid Departure from hotel Splendid at 15:00
      • 15:00
        Splendid - Kotor – Perast – Gospa - Perast - Splendid 20m
    • 09:00 11:15
      Detector & Nuclear Electronics Splendid Conference & SPA Resort, Conference Hall Baltšiċa
      Convener: Dr Nikolay Gorbunov (JINR)
      • 09:00
        Radiation Damage Studies of Silicon Photomultipliers in Neutrons Field of IBR-2 15m
        We report on a study of the radiation resistance of silicon photomultipliers (SiPMs) produced by HAMAMATSU. The SiPMs were irradiated in neutron fluxes of the IBR-2 reactor at JINR. The tested SiPMs received fluences from 10^12 up to 2x10^14 neutrons/cm^2. The irradiated detectors were investigated using a radioactive source and laser flashes at a temperature of -30°C. The measurements showed that the SiPMs remain fully functional as photon detectors up to a neutron fluence of 2x10^14 despite a significant increase in noise.
        Speaker: Dr Sergei Afanasiev (JINR)
      • 09:15
        Stand for the investigation of the radiation hardness of plastic scintillators and reflectors 15m
        An experimental stand for the investigation of the radiation hardness of plastic scintillators has been created. Two types of polystyrene-based samples (UPS-923A and SCSN-81) and two types of polyvinyltoluene-based samples (BC-408 and EJ-260) were studied. The radiation damage of ESR and Tyvek reflectors and of Paint+TiO2 and PMS+TiO2 coatings was also studied.
        Speaker: Dr Sergei Afanasiev (JINR)
      • 09:30
        The software and solutions for express processing of the raw list mode data measured on the neutron spectrometers of the IBR-2 reactor using a delay line position-sensitive detector as designed to be integrated into the experiment control system 15m
        Recently we have performed a comparative study of the characteristics of the data acquisition systems for the position-sensitive detectors with a delay line operating on the neutron instruments of the IBR-2 reactor. As a result, to have an optimal version of the electronics, we have chosen two directions of further development: the DeLiDAQ-2 system for high-flux measurements and the CAEN N6730 digitizer-based system for high-precision experiments. The study has also revealed an urgent need to integrate list mode measurements into the experiment control system on some of the neutron spectrometers. So far, the experiment control system SONIX operating on most of the IBR-2 spectrometers has received and displayed the data measured in the histogram mode. Besides the results of the comparative study, the report also describes the software that is being developed to form events from raw data, sort them, select them by appropriate criteria and histogram them, and that is suitable for integration into SONIX. The proposed solutions are not limited to any specific types of electronics for PSD.
        Speaker: Dr Elena Litvinenko (JINR)
        Slides
      • 09:45
        ELECTRONICS OF STRAW TRACKERS IN NA62, NA64 AND SPD EXPERIMENTS 15m
        The NA62, NA64 and SPD experiments use charged-particle trackers based on plastic drift tubes (straws). The main tracker parameters are spatial resolution, track efficiency, rate capability, low radiation length and reliability. The straw spatial resolution depends on the precision of measuring the drift time of the first electron from the track to the anode wire. An RT dependence is used to convert time into coordinate; it is measured with a low threshold of the FE electronics (a few fC) and a TDC time resolution of the order of 1 ns per channel. The talk will present an overview of the NA62, NA64 and SPD straw trackers and of the options for FE electronics and TDCs, and will discuss possible alternatives.
        Speaker: Mr Temur Enik (Russia)
      • 10:00
        Front-End Electronics for TPC/MPD detector of NICA project 15m
        The Time Projection Chamber (TPC) is the main tracker of the Multi-Purpose Detector (MPD). The detector will operate at one of the beam interaction points of the NICA collider (Nuclotron-based Ion Collider fAcility) and is optimized to investigate heavy-ion collisions in the energy range from 4 to 11 GeV/n. The TPC Front-End Electronics (FEE) will operate with an event rate of up to 7 kHz at an average luminosity of 10^27 cm^-2 s^-1 for gold collisions at 9 GeV/n. The FEE is based on the novel ASIC SAMPA, FPGAs and high-speed serial links. Each of the 24 readout chambers will be served by 62 Front-End Cards (FECs) and one Readout and Control Unit (RCU). The whole system will contain 1488 FECs and 24 RCUs, which gives 95232 registration channels (a short consistency check of these numbers follows this item). The report presents the current status of the FEE and the results of the FEC testing.
        Speaker: Mr Stepan Vereschagin (Joint Institute for Nuclear Research)
        Slides
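        A quick consistency check of the channel count quoted above; the per-FEC channel count is inferred from the quoted totals, not stated in the abstract:
        \[ 24 \ \text{chambers} \times 62 \ \tfrac{\text{FECs}}{\text{chamber}} = 1488 \ \text{FECs}, \qquad \frac{95232 \ \text{channels}}{1488 \ \text{FECs}} = 64 \ \text{channels per FEC}. \]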
      • 10:15
        Current-Voltage Characteristics of Aluminium and Zinc Implanted Silicon for Radiation Detection Applications 15m
        In this research, a comprehensive review of the current-voltage (I-V) measurements that were carried out on Schottky diodes fabricated on undoped and metal-doped silicon is presented. The metals used are aluminium and zinc. A change in silicon conductivity due to the implantation was investigated by means of the I-V technique at room temperature. The qualitative analysis of the I-V characteristics showed that the implantation reduces the measured current of the material. The decrease in current indicates that the resistivity of the material has increased after ion implantation. The implanted silicon also shows ohmic I-V behaviour. Based on these two features, high resistivity and ohmic behaviour, the results of this work show that, in silicon, aluminium and zinc are responsible for relaxation behaviour of the material. The relaxation behaviour is due to a defect level found at the mid-gap of silicon that acts as a generation-recombination center to compensate charge carriers. Relaxation material has been found to be resistant to radiation damage. This conversion of the material to relaxation behaviour indicates that both metals are suitable dopants for radiation hardness of the material. Thus, these metals can be used to improve the properties of silicon to fabricate devices that can be used as efficient radiation detectors for current and future high-energy physics experiments. Additionally, a change in electrical parameters such as the ideality factor, saturation current and Schottky barrier height due to metal doping is explained in terms of the manipulation of the silicon bulk.
        Speaker: Mr DUKE OEBA (University of South Africa)
        Slides
      • 10:30
        Trigger and beam monitoring system of BM@N and SRC experiments 15m
        The report describes the trigger unit control and monitoring system used at the BM@N and SRC experiments at JINR. Both experiments require very good trigger time resolution; therefore the trigger equipment must be located in the beam area to keep cable lengths small. This restricts access to the trigger equipment during the experiment, so trigger control and adjustments have to be done remotely. The trigger processor is built using an FPGA, and all trigger logic and delay lines are located inside this FPGA. The control of the trigger logic, trigger adjustment and monitoring is performed with a set of programs with graphical user interfaces. This set includes an HV power supply server, the trigger unit manager (which also controls the front-end electronics LV power supplies), a web server publishing the spill summary information, and the beam data server publishing in real time the experiment-relevant curves such as the actual beam intensity and counts. The beam spill summary information and trigger-relevant data are published by a TCP/IP server and transferred to the experiment slow control system as JSON data blocks. In addition, the trigger unit manager archives beam information to a local log file, which can be browsed with a GUI-based application. The system has been used successfully for more than three years.
        Speaker: Dr Sergey Sergeev (JINR)
        Slides
      • 10:45
        Project of a fast interaction trigger for MPD experiment 15m
        The architecture of the Level 0 trigger system based on the Fast Forward Detector (FFD) is described. The system must provide fast and effective triggering on nucleus-nucleus collisions at the center of the setup, with high efficiency for central and semi-central Au+Au collisions. It should identify the z-position of the collision with an uncertainty better than 5 cm and the event multiplicity in the pseudorapidity interval 2.7 < |η| < 4.1. The system is modular and consists of two arm signal processors and a vertex processor; FPGAs are widely used. The arm processor crates are located on both sides of the MPD magnet yoke to provide minimal cable length, and the vertex processor crate is located in the middle of the rack line. Each arm processor receives information from 80 FFD cells and performs its preliminary processing. The result of the pre-processing is sent to the vertex processor. This information includes the multiplicity of hits in the FFD cells and a time mark signal of the first hit. The arm processor crate also contains front-end low-voltage power supplies. The vertex processor performs the final trigger processing, including estimation of the summary FFD hit multiplicity and estimation of the z-coordinate of the interaction. The vertex and arm processors contain interface modules with optical links. Since all this equipment is located in the experimental area and is exposed to high-energy particle irradiation, which affects the FPGA configuration RAM, the processor units containing FPGAs are equipped with configuration loading modules. These modules have a library of FPGA configuration files on board and can simultaneously reload the configuration RAM of all system FPGAs on a single command.
        Speaker: Dr Sergey Sergeev (JINR)
        Slides
    • 09:00 11:15
      Machine Learning Algorithms and Big Data Analytics Splendid Conference & SPA Resort, Conference Hall Petroviċa
      Convener: Prof. Gennady Ososkov (Joint Institute for Nuclear Research)
      • 09:00
        Structural approach to the deep learning method 15m
        When considering any method, it is customary to distinguish several structural levels: syntax, semantics and pragmatics. Syntax gives the ability to apply the method in question, semantics helps set tasks, and pragmatics answers the questions of what the essence of the method is and what its place is among other methods. In this paper, the authors apply this approach to the consideration of deep learning. Syntax: how the method can be used in practice. Semantics: what approaches exist within this method and how these approaches relate to the problems being solved. Pragmatics: the genesis of the method is examined, along with the reasons for its popularity, its possible applications and its restrictions. As a result, the authors draw conclusions about the prospects for the use of the deep learning method for a number of practical problems.
        Speaker: Dr Dmitry Kulyabov (PFUR & JINR)
        Slides
      • 09:15
        The use of CNN for image analysis from Cherenkov telescopes in the TAIGA experiment 15m
        Artificial neural networks are a modern powerful tool for solving various problems for which it is difficult to propose well-formalized solutions; such tasks include various aspects of image analysis. This paper describes the use of convolutional neural networks (CNNs) for the problems of classifying the type of primary particles and estimating their energy using images obtained from the imaging atmospheric Cherenkov telescope (IACT) in the TAIGA experiment. For the problem of classifying primary particles, it has been shown that the use of a CNN made it possible to significantly improve the quality criterion for the correct classification of gammas compared to traditional methods using the Hillas parameters. For the problem of estimating the energy of primary gammas, the use of a CNN made it possible to obtain good results for wide air showers whose centers are located far enough from the telescope. This is particularly important for the Cherenkov telescope in the TAIGA experiment, which uses a wide-angle camera, where traditional methods do not work. The neural networks were implemented using the PyTorch and TensorFlow libraries. Monte Carlo event sets obtained using the CORSIKA program were used to train the CNNs. CNN training was performed both on ordinary servers and on servers equipped with Tesla P100 GPUs. A minimal illustrative classification-network sketch is given after this item. This work was supported by the RSF Grant No. 18-41-06003.
        Speaker: Dr Alexander Kryukov (SINP MSU)
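        A minimal PyTorch sketch of a binary gamma/hadron image classifier of the kind described above. The single-channel 32x32 "camera image" input, the layer sizes and the stand-in data are illustrative assumptions; this is not the TAIGA network.
        import torch
        import torch.nn as nn

        class SmallIACTNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 8 * 8, 2)  # two classes: gamma / hadron

            def forward(self, x):
                x = self.features(x)
                return self.classifier(x.flatten(1))

        # One illustrative training step on random stand-in data (real training would
        # use CORSIKA Monte Carlo images, as the abstract explains).
        model = SmallIACTNet()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        images = torch.randn(64, 1, 32, 32)   # batch of fake camera images
        labels = torch.randint(0, 2, (64,))   # fake gamma/hadron labels
        loss = nn.CrossEntropyLoss()(model(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()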
      • 09:30
        A study on performance assessment of essential clustering algorithms for the interactive visual analysis toolkit InVEx 15m
        Interactive visual analysis tools bring the ability to discover knowledge in large and complex datasets in real time using visual analytics. This involves multiple iterations of data processing using various data handling approaches, and the efficiency of the whole analysis chain depends on the performance of the chosen techniques and related implementations, as well as on the quality of the applied methods. Stages where data processing includes intellectual handling (i.e., data mining and machine learning), which are the most resource-intensive, require particular attention when evaluating different approaches. Clustering is one such machine learning technique that is commonly used to discover groups of data objects for further analysis. This work is focused on the evaluation of clustering algorithms within the interactive visual analysis toolkit InVEx (Interactive Visual Explorer). InVEx represents a visual analytics approach aimed at cluster analysis and the in-depth study of implicit correlations between multidimensional data objects. It was originally designed to enhance the analysis of computing metadata of the ATLAS experiment at the LHC for operational needs, but it also provides the same capabilities for other domains for analyzing large amounts of multidimensional data. The experiments and evaluation processes are carried out using operational data from the supercomputer at Lomonosov Moscow State University. These processes include benchmark tests to assess the relative performance of the chosen clustering algorithms and corresponding metrics to assess the quality of the produced clusters (an illustrative assessment sketch follows this item). The obtained results will be used as guidelines for assisting users in the process of visual analysis with InVEx.
        Speaker: Mikhail Titov (Lomonosov Moscow State University)
        Slides
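        An illustrative sketch of the kind of clustering assessment described above: compare run time and a cluster-quality metric for two standard algorithms. This is not the InVEx code, and the data are synthetic stand-ins rather than supercomputer operational data.
        import time
        import numpy as np
        from sklearn.cluster import KMeans, DBSCAN
        from sklearn.datasets import make_blobs
        from sklearn.metrics import silhouette_score

        X, _ = make_blobs(n_samples=5000, centers=6, n_features=8, random_state=0)

        for name, algo in [("k-means", KMeans(n_clusters=6, n_init=10, random_state=0)),
                           ("DBSCAN", DBSCAN(eps=1.5, min_samples=10))]:
            start = time.perf_counter()
            labels = algo.fit_predict(X)
            elapsed = time.perf_counter() - start
            # The silhouette metric needs at least two distinct labels.
            quality = silhouette_score(X, labels) if len(set(labels)) > 1 else float("nan")
            print(f"{name:8s} time={elapsed:.3f}s silhouette={quality:.3f}")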
      • 09:45
        Identification of tau lepton using Deep Learning techniques at CMS 15m
        The reconstruction and identification of tau leptons in their semi-leptonic decays (hereinafter referred to as hadronic decays) are crucial for all analyses with tau leptons in the final state. To discriminate the hadronic decays of taus from the three main backgrounds (quark or gluon jets, electrons, and muons), while maintaining a low misidentification rate (below 1%) and at the same time a high signal efficiency, the information of multiple CMS sub-detectors must be combined. The application of deep machine learning techniques allows exploiting the available information in a very efficient way. The introduction of a new multi-class DNN-based discriminator provides a considerable improvement of the tau identification performance at CMS.
        Speaker: Konstantin Androsov (INFN Pisa (Italy))
        Slides
      • 10:00
        DijetGAN: A Generative-Adversarial Network Approach for the Simulation of QCD Dijet Events at the LHC 15m
        We present a Generative Adversarial Network (GAN) based on convolutional neural networks that is used to simulate the production of pairs of jets at the LHC. The GAN is trained on events generated using MadGraph5 + Pythia8 and the Delphes3 fast detector simulation. A number of kinematic distributions, both at Monte Carlo truth level and after the detector simulation, can be reproduced by the generator network with a very good level of agreement. Our GAN can generate 1 million events in less than a minute and can be used to increase the size of Monte Carlo samples used by LHC experiments, which are currently limited by the high CPU time required to generate events. A minimal GAN training sketch is given after this item.
        Speaker: Dr Michele Faucci Giannelli (University of Edinburgh)
        Slides
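        A minimal PyTorch GAN training-loop sketch illustrating the generator/discriminator setup described above. The 8-dimensional "event" vectors, the fully connected layers and the random training data are stand-in assumptions; this is not the DijetGAN architecture or its training sample.
        import torch
        import torch.nn as nn

        LATENT, EVENT = 16, 8

        G = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, EVENT))
        D = nn.Sequential(nn.Linear(EVENT, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

        opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
        bce = nn.BCEWithLogitsLoss()

        real_events = torch.randn(10000, EVENT)  # stand-in for MadGraph+Pythia+Delphes events

        for step in range(200):
            real = real_events[torch.randint(0, len(real_events), (128,))]
            fake = G(torch.randn(128, LATENT))

            # Discriminator step: label real events 1 and generated events 0.
            loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()

            # Generator step: try to make the discriminator call generated events real.
            loss_g = bce(D(fake), torch.ones(128, 1))
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()

        # After training, generation is a single forward pass, which is why a GAN
        # can produce large event samples very quickly.
        sample = G(torch.randn(1000, LATENT))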
      • 10:15
        Anomaly detection and breakdown prediction in RF power source output: a review of approaches 15m
        Linear accelerators are complex machines potentially confronted with significant downtime periods due to anomalies and subsequent breakdowns in one or more components. Reliable operation of linear accelerators is critical for the spread of this technique in medical environments. At CERN, where LINACs are used for particle research, similar issues are encountered, such as the appearance of jitters in plasma sources (2 MHz RF generators), which can have a significant impact on the subsequent beam quality in the accelerator. The "SmartLINAC" project was established as an effort to increase LINACs' reliability by means of early anomaly detection and prediction in their operation, down to the component level. The research described in this article reviews the different techniques used to detect anomalies, from their earliest signals, using data from 2 MHz RF generators. This research is an important step forward in the SmartLINAC project, but represents only its beginning. The authors used four different techniques in an effort to determine the most appropriate one for detecting anomalies in the generators' data. The main challenge came from the nature of the data, which have a high noise-to-signal ratio and present several kinds of anomalies from different sources, and from the lack of exhaustive and precise labelling. The techniques are based on different approaches using machine learning and statistics. This research allowed us to better understand the nature of the data we are working with. Through it, we encountered characteristics of the data we had not foreseen, allowing us to start addressing the project's objectives: not only identifying and differentiating possible anomalies, but also forecasting, to some extent, potential breakdowns.
        Speaker: Mr Yann Donon (Samara National research University)
        Slides
      • 10:30
        Investigating new neural network methods for the NOvA experiment 15m
        The NOvA neutrino detector experiment is one of the first High Energy Physics experiments to use neural networks (specifically convolutional neural networks, or CNNs) extensively for its analysis. Results have been published using CNNs to categorize events based on the interaction type, and work is being done to use CNNs to reconstruct other event properties and kinematics. We will present an investigation into new methods that may assist the standard NOvA CNN in categorizing events, or that may reduce the network size and training time, allowing for better optimization of network hyperparameters, and thereby improving performance.
        Speaker: Mr Christopher Kullenberg (JINR)
        Slides
      • 10:45
        Tracking for BM@N GEM detector on the basis of graph neural network 15m
        Particle tracking is a very important part of modern high energy physics experiments. While the data stream from such experiments is increasing day by day, current tracking methods lack the ability to cope with such amounts of data. To solve this problem, new effective machine learning algorithms are being actively developed in the HEP.TrkX project for Large Hadron Collider detectors and for the GEM detector of the BM@N experiment. This work is a logical continuation of the research presented at the XXIII International Scientific Conference of Young Scientists and Specialists (AYSS-2019), where we introduced a new application for tracking with a Graph Neural Network based on Minimum Branching Tree preprocessing. However, that approach had some problems, including overall inaccuracy in the segment purification of the preprocessed event graph. In this work, we overcome many of these problems and introduce a revised approach with improved tracking accuracy. Promising results of the improved GNN tracking are given.
        Speaker: Mr Egor Shchavelev (Saint Petersburg State University)
        Slides
      • 11:00
        The new machine learning approach for exclusion of afterpulses in drift chamber data 15m
        The large-scale coordinate-tracking detector TREK, based on multi-wire drift chambers, is being developed at the Experimental Complex NEVOD at MEPhI to study near-horizontal dense muon bundles generated by ultra-high-energy cosmic rays. The total area of the setup is 250 m^2. The main goal of the installation is the solution of the so-called "muon puzzle": the observed excess of the number of muons in extensive air showers compared to simulations. The use of drift chambers from an IHEP accelerator experiment allows the reconstruction of events with a very high density of muons in a bundle (more than 10 particles per m^2). However, the presence of afterpulses in the response of the chamber electronics leads to the reconstruction of fake tracks. Attempts to exclude afterpulses by ordinary analytical methods (e.g., by means of pulse duration) have failed. A new way to solve the afterpulse problem is the use of machine learning. A method based on a convolutional neural network is being developed. This approach can take into account both the drift times and the duration of the signals. The talk presents the results of applying the neural network trained on data obtained by analytical and Garfield++ simulations with different muon multiplicities and angles in the bundles.
        Speaker: Mr Vladislav Vorobyev (MEPhI)
        Slides
    • 11:15 11:30
      Coffee break 15m Conference Bar
    • 11:30 13:30
      Machine Learning Algorithms and Big Data Analytics Splendid Conference & SPA Resort, Conference Hall Petroviċa
      Convener: Dr Petr Zrelov (LIT JINR)
      • 11:30
        LOOT: Novel end-to-end trainable convolutional neural network for particle track reconstruction 15m
        We introduce a radically new approach to the particle track reconstruction problem for the tracking detectors of HEP experiments. We developed an end-to-end trainable YOLO-like convolutional neural network named Look Once On Tracks (LOOT), which can process the whole event by representing it as an image: instead of three RGB channels, the discretized contents of the sequential detector coordinate stations are used as the channels in depth (an illustrative sketch of this representation follows this item). The LOOT neural net avoids the problems of the existing sequential tracking algorithms because it does its computations in one shot. The first results of applying the algorithm to data from Monte Carlo simulations are presented and discussed. The reported study was funded by RFBR, project number 19-57-53002. Keywords: tracking, GEM detector, YOLO, convolutional neural network, particle track reconstruction
        Speaker: Mr Pavel Goncharov (Sukhoi State Technical University of Gomel, Gomel, Belarus)
        Slides
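        A sketch of the event representation described above: the event is treated as an "image" whose depth channels are the discretized contents of sequential coordinate stations, processed by a convolutional network in one shot. The shapes and layers are illustrative assumptions, not the actual LOOT model.
        import torch
        import torch.nn as nn

        N_STATIONS, H, W = 6, 64, 64          # hypothetical detector discretization

        net = nn.Sequential(
            nn.Conv2d(N_STATIONS, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            # A 1x1 convolution producing a per-cell prediction map, in the spirit of
            # single-shot (YOLO-like) detectors: no sequential track following.
            nn.Conv2d(64, 5, kernel_size=1),
        )

        event = torch.zeros(1, N_STATIONS, H, W)   # one event: hit contents per cell per station
        event[0, 2, 10, 20] = 1.0                  # a toy hit on station 3
        prediction_map = net(event)                # (1, 5, 64, 64) output in a single forward pass
        print(prediction_map.shape)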
      • 11:45
        Global Neutrino Analysis framework and GPU based computations 15m
        GNA is a high-performance fitting framework developed for the data analysis of neutrino experiments. The framework is based on data flow principles: an experiment model is represented by a computational graph of simple functions as separate nodes that are computed lazily. In this work, we describe the GPU support library for GNA, named cuGNA, which uses the CUDA toolkit. This library is implemented to combine the performance of GPUs with the versatility of the data flow approach. We have added GPU-based node implementations to the existing library and implemented GNA core features that make GPU support hidden from the end user. The current status of CUDA computations in GNA, tests on real-life computational graphs, and a performance comparison to CPU-based models are presented in this work.
        Speaker: Ms Anna Fatkina (JINR)
        Slides
      • 12:00
        Accelerating the particle-in-cell method of plasma and particle beam simulation using CUDA tools 15m
        For simulating the dynamics of charged particles in electric and magnetic fields, a particle-in-cell (PIC) method is often used. In it, the position and velocity of each particle or superparticle is tracked, while the charge density and current density necessary to simulate particle interactions are computed on a stationary mesh. Several approaches are available for integrating the particle equations of motion (the particle mover) and calculating the electric and magnetic fields (the field solver). The Ef and Ef_python applications aim to use particle-in-cell methods to simulate plasma and particle beams in an electron beam ion source. The main goal, unlike many other PIC applications, is not to simulate free plasma, but ion sources. As the particle mover, the well-established second-order explicit leapfrog method is used, also known as the Boris scheme. The electric field created by charged particles and conducting regions is calculated using the finite-difference method of solving the Poisson equation on a rectangular regular grid. Each PIC simulation time step consists of the following operations: advance particle positions and momenta, generate new particles, calculate the charge density, compute the electric potential, and calculate the electric field (a schematic particle-push sketch follows this item). The simulation performance of each step depends on the number of particles as well as the number of spatial mesh cells, with different operations dominating the performance considerations. The field solver usually represents a major portion of the computational difficulty. This report describes the efforts to profile and accelerate the ef_python application using the cupy library. With minimal changes to the program, cupy allowed most of the simulation operations to be performed on the GPU through CUDA. In addition, to accelerate the field solver, algebraic multigrid methods were utilized, provided by the PyAMG library on the CPU and the AMGX library on the GPU. A major speed-up was achieved, especially with fine spatial grids and a powerful GPU on the HybriLIT cluster.
        Speaker: Mr Ivan Kadochnikov (JINR)
        Slides
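        A schematic sketch of the particle push and of the numpy-to-cupy drop-in idea described above. Only the electric-field "kick" is shown; the full Boris scheme additionally rotates the velocity in the magnetic field. This is an illustration under those assumptions, not code from Ef or ef_python.
        import numpy as np
        try:
            import cupy as xp   # if a GPU and cupy are available, the arrays live on the GPU
        except ImportError:
            xp = np             # otherwise the same code runs on the CPU with numpy

        def leapfrog_step(pos, vel, efield, charge, mass, dt):
            """Advance particle positions and velocities by one time step."""
            vel = vel + (charge / mass) * efield * dt   # electric-field kick (Boris rotation omitted)
            pos = pos + vel * dt                        # drift with the updated velocity
            return pos, vel

        n = 100000
        pos = xp.random.rand(n, 3)
        vel = xp.zeros((n, 3))
        efield = xp.ones((n, 3)) * 1e-2   # stand-in for the field interpolated to the particles
        for _ in range(10):
            pos, vel = leapfrog_step(pos, vel, efield, charge=-1.0, mass=1.0, dt=1e-3)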
      • 12:15
        Blocking strategies to accelerate record matching for Big Data integration 15m
        Record matching represents a key step in Big Data analysis, especially important for leveraging disparate large data sources. Methods of probabilistic record linkage provide a good framework to estimate and interpret partial record matches. However, they require combining string distances for the compared records; that is, direct use of probabilistic record linkage requires processing the Cartesian product of the record sets. A "blocking" step is often used, where candidate record pairs are required to match exactly on a categorical column, greatly limiting the number of record comparisons and the computational cost. However, this method requires a certain level of data quality and agreement between sources on the categorical column. We propose a more flexible approach for situations where no good blocking column can be chosen. The key idea is to use approximate nearest neighbor search as the blocking filter. One possible method is to vectorize one string column with TF or TF/IDF into term frequency vectors, then use Locality Sensitive Hashing to quickly search for approximate nearest neighbors in this vector space (an illustrative sketch follows this item). Apache Spark libraries were used to show the effectiveness of this approach for linking open company registration datasets.
        Speaker: Mr Ivan Kadochnikov (JINR, PRUE)
        Slides
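        An illustrative PySpark sketch of the blocking idea described above: vectorize a string column into term-frequency vectors and use an LSH approximate join as the blocking filter before the detailed record comparison. The column values and thresholds are hypothetical; this is not the authors' code.
        from pyspark.sql import SparkSession
        from pyspark.ml.feature import Tokenizer, HashingTF, MinHashLSH

        spark = SparkSession.builder.appName("blocking-sketch").getOrCreate()

        a = spark.createDataFrame([(1, "acme trading ltd"), (2, "globex corporation")],
                                  ["id", "name"])
        b = spark.createDataFrame([(10, "acme trading limited"), (20, "initech llc")],
                                  ["id", "name"])

        tokenizer = Tokenizer(inputCol="name", outputCol="tokens")
        tf = HashingTF(inputCol="tokens", outputCol="features", numFeatures=1 << 18)
        lsh = MinHashLSH(inputCol="features", outputCol="hashes", numHashTables=5)

        a_vec = tf.transform(tokenizer.transform(a))
        b_vec = tf.transform(tokenizer.transform(b))
        model = lsh.fit(a_vec)

        # Candidate pairs whose approximate Jaccard distance is below the threshold;
        # only these pairs go on to the expensive probabilistic record-linkage step.
        candidates = model.approxSimilarityJoin(a_vec, b_vec, threshold=0.6,
                                                distCol="jaccard_distance")
        candidates.select("datasetA.id", "datasetB.id", "jaccard_distance").show()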
      • 12:30
        Big Data technologies for labour market analysis 15m
        This paper discusses some approaches to intellectual text analysis applied to automated monitoring of the labour market. A scheme for constructing an analytical system based on Big Data technologies for the labour market is proposed. Combinations of methods for extracting semantic information about objects and the connections between them (for example, from job advertisements) from specialized texts were compared. A system for monitoring the Russian labour market has been created, and work is underway to include other countries in the analysis. The considered approaches and methods can be widely used to extract knowledge from large amounts of text.
        Speaker: Sergey Belov (Joint Institute for Nuclear Research)
        Slides
      • 12:45
        DYNAMIC APACHE SPARK CLUSTER FOR ECONOMIC MODELING 15m
        Modern econometric modeling of macroeconomic processes usually meets certain challenges due to the incompleteness and heterogeneity of the initial information, as well as the huge data volumes involved. In this work, using the example of modeling the level of employment in the regions of the Russian Federation, the effectiveness of jointly using Big Data technologies and the automated deployment of a dynamic virtual computing cluster for solving such problems is shown. Several models of the regional labor market were constructed, taking into account such basic macroeconomic indicators as per capita income, the volume of paid services to the population per capita, the industrial production index and others. A classification of the subjects of the Russian Federation according to the level of employment was obtained; it is stable against different methods (single linkage, complete linkage, Ward's method), as illustrated by the sketch after this item. For the analysis, a dynamic Apache Spark cluster deployed by means of the SIMPLE environment developed at CERN was used.
        Speaker: Ms IULIIA GAVRILENKO (Research Assistant, Plekhanov Russian University of Economics, Moscow, Russia)
        Slides
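        A sketch of the stability check mentioned above: cluster the same objects with single, complete and Ward linkage and compare the resulting partitions. The "regional indicator" data here are random stand-ins, not the actual macroeconomic dataset, and the number of groups is an assumption.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(0)
        regions = rng.normal(size=(85, 4))   # e.g. income, paid services, production index, ...

        partitions = {}
        for method in ("single", "complete", "ward"):
            tree = linkage(regions, method=method)
            partitions[method] = fcluster(tree, t=4, criterion="maxclust")  # cut into 4 groups

        # Pairwise agreement between methods; values near 1 indicate a stable classification.
        for m1 in partitions:
            for m2 in partitions:
                if m1 < m2:
                    print(m1, m2, adjusted_rand_score(partitions[m1], partitions[m2]))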
      • 13:00
        Simulating Lattice QCD on the "Govorun" Supercomputer 15m
        Lattice Quantum Chromodynamics (QCD) is a well-established non-perturbative approach to the theory of strong interactions, QCD. It provides a framework for numerical studies of various complex problems of QCD. Such computations are numerically very demanding and require the most powerful modern supercomputers and algorithms. In this talk, the lattice QCD simulations carried out on the "Govorun" supercomputer are discussed. The basic algorithms and their implementation on the "Govorun" architecture are reviewed. Important physical results and projects being studied on "Govorun", including QCD at finite temperature, isospin and baryon density, are presented.
        Speaker: Andrey Kotov (Institute for Theoretical and Experimental Physics, Joint Institute for Nuclear Research)
        Slides
      • 13:15
        Hit finder and track reconstruction algorithms in the Multi-Wire Proportional Chambers of BM@N experiment 15m
        BM@N (Baryonic Matter at Nuclotron) is an experiment being developed at the Joint Institute for Nuclear Research (Dubna, Russia). It is considered the first step towards implementing the fixed-target program at the NICA accelerator complex (Nuclotron-based Ion Collider fAcility). One of the important components of the event reconstruction procedure is the monitoring of the beam trajectory and the vertex position in the transverse plane. A system consisting of two Multi-Wire Proportional Chambers (MWPC) is used for this purpose in BM@N. In this work we describe the hit finder and track reconstruction algorithms for the MWPC. Results of Monte Carlo tests and efficiency calculations for different input parameters are presented. The MWPC track analysis of the BM@N experimental data from RUN-2018 has started, and the first results are shown.
        Speaker: Prof. Sergei Nemnyugin (Saint-Petersburg State University)
        Slides
    • 11:30 13:30
      Triggering, Data Acquisition, Control Systems Splendid Conference & SPA Resort, Conference Hall Baltšiċa
      Convener: Dr Oleg Strekalovsky (JINR)
      • 11:30
        Design challenges of the CMS High Granularity Calorimeter Level 1 trigger 15m
        The high luminosity (HL) LHC will pose significant detector challenges for radiation tolerance and event pileup, especially for forward calorimetry, and this will provide a benchmark for future hadron colliders. The CMS experiment has chosen a novel high granularity calorimeter (HGCAL) for the forward region as part of its planned Phase 2 upgrade for the HL-LHC. Based largely on silicon sensors, the HGCAL features unprecedented transverse and longitudinal readout segmentation which will be exploited in the upgraded Level 1 (L1) trigger system. The high channel granularity results in around one million trigger channels in total, to be compared with the 2000 trigger channels in the endcaps of the current detector. This presents a significant challenge in terms of data manipulation and processing for the trigger. In addition, the high luminosity will result in an average of 140 interactions per bunch crossing that give a huge background rate in the forward region and these will need to be efficiently rejected by the trigger algorithms. Furthermore, three-dimensional reconstruction of the HGCAL clusters in events with high hit rates is also a more complex computational problem for the trigger than the two-dimensional reconstruction in the current CMS calorimeter trigger. The status of the trigger architecture and design, as well as the concepts for the algorithms needed in order to tackle these major issues and their impact on trigger object performance, will be presented.
        Speaker: Dr Vito Palladino (Imperial College London)
        Slides
      • 11:45
        CMS Drift Tubes at High-Luminosity LHC: chamber longevity and upgrade of the detector electronics 15m
        Drift Tube (DT) chambers equip the barrel region of the CMS muon spectrometer, serving both as a tracking and a triggering detector. At the High-Luminosity LHC (HL-LHC) they will be challenged to operate at background rates and withstand integrated doses well beyond the specifications for which they were initially designed. Longevity studies show that, though a certain degree of ageing is expected, a replacement of the DT chambers is not needed for CMS to operate successfully at the HL-LHC. On the other hand, the on-board readout and trigger electronics which presently equip the chambers are not expected to cope with the harsh HL-LHC conditions. For this reason, they will be replaced with time-to-digital converters (TDCs) streaming hits to a back-end electronics system where trigger segment reconstruction and readout event matching will be performed. This new architecture will allow local reconstruction to be operated on the trigger electronics, exploiting the full detector granularity and the ultimate DT cell resolution. Already during the second LHC long shutdown, a slice-test system consisting of four DT chambers will operate using the upgraded electronics, as an early test of the HL-LHC DT setup. In this document we outline the present knowledge about the DT detector longevity. Furthermore, we describe the prototype electronics and back-end demonstrators, as well as the state of the art of the local trigger algorithms that are being designed to run in the upgraded DT system. Performance measurements of the upgraded DT trigger, based on simulations, will be presented, highlighting their impact on the CMS muon trigger at large. The status of the operation of the DT slice test will also be covered, with emphasis on the status of the implementation of the trigger algorithms in hardware.
        Speaker: Dr Carlo Battilana (INFN)
        Slides
      • 12:00
        Performance of the Pixel Luminosity Telescope for Luminosity Measurement at CMS during Run2 15m
        The Pixel Luminosity Telescope (PLT) is a dedicated system for luminosity measurement at the CMS experiment using silicon pixel sensors arranged into "telescopes", each consisting of three planes. It was installed in CMS at the beginning of 2015 and has been providing online and offline luminosity measurements throughout Run 2 of the LHC (2015-2018). The online bunch-by-bunch luminosity measurement reads out at the full bunch crossing rate of 40 MHz, using the "fast-or" capability of the pixel readout chip to identify events where a hit is registered in all three sensors in a telescope, corresponding primarily to tracks originating from the interaction point. In addition, the full pixel information is read out at a lower rate, allowing for studies with full track reconstruction. In this talk, we will present the results and techniques used during Run 2, including commissioning, luminosity calibration using Van der Meer scans, and measurement and correction of stability and linearity effects using data from emittance scans.
        Speaker: Mr Francesco Romeo (Vanderbilt University)
        Slides
      • 12:15
        Particle detection system for SHE synthesis at DGFRS-II 15m
        This talk will provide information about the particle detection chain for the new Dubna gas-filled recoil separator DGFRS-II. The detector chamber itself consists of a time-of-flight system and an implantation double-sided silicon strip detector (DSSSD) surrounded by six single-sided strip detectors (224 channels overall). The main part of the talk will focus on the PXI-based Alpha & Gamma spectrometer, which will handle each channel from the DSSSD and the side detectors independently. This leads to a PC-controlled PXI multi-crate system with timestamp synchronization and more. The last part will be about the DAQ software written in C++ and the relevant online monitoring client.
        Speaker: Mr Leo Schlattauer (Palacky University Olomouc, Czech Republic)
        Slides
      • 12:30
        Data Quality Monitoring for the CMS Cathode Strip Chambers: implementation, performance and operational experience 15m
        CMS Cathode Strip Chambers (CSC) are used for muon identification, measurements, and trigger in the forward direction. The system comprises 540 six-layer detectors with the overall sensitive area of about 6000 m^2 and has more than 500K electronics readout channels. Automated CSC Data Quality Monitoring system (CSC DQM) is an integral part of CSC commissioning and operation, as well as of the central CMS monitoring system. The system monitors CSC data format integrity, hardware-set diagnostic bits, and a large number of occupancy distributions (>100K histograms). It is designed to detect problems ranging from dead/intermittent or noisy channels/boards, readout timing discrepancies to inconsistencies in higher level reconstructed physics objects, and to propagate automatically corresponding alarms, with a properly assigned level of severity and a possible troubleshooting diagnostic, to the top level. Given a significant number of monitor-able objects, the system implementation specifically focuses on reducing complexity of monitoring and problems detection procedures for an end-user. The DQM system functions in various modes: local CSC system running, global CMS running, and offline (i.e. using already recorded data). In this contribution an overview of implementation, general performance, and operational experience are provided.
        Speaker: Mr Victor Barashko (University of Florida)
        Slides
      • 12:45
        Modernization of neutron Fourier chopper for High-resolution Fourier diffractometer (HRFD) 15m
        The High-resolution Fourier diffractometer (HRFD) is operated at the pulsed reactor IBR-2 of FLNP JINR, allowing precision studies of the crystal structure and microstructure of inorganic materials. The use of a fast Fourier chopper for intensity modulation of the primary neutron beam and the correlation method of diffraction data accumulation is a principal feature of the HRFD design. This allows one to obtain extremely high resolution (Δd/d ≈ 0.001) at HRFD in a wide range of interplanar distances at a relatively short flight distance from the chopper to the sample position (L = 20 m). In 2016 the old Fourier chopper (in operation for ~20 years) was replaced with a new one manufactured by the Mirrotron Ltd company (Hungary). The basic mechanical characteristics of the previous version of the Fourier chopper, in particular the rotor diameter, the number of slits, the slit length, the slit width at the middle, the absorbing material Gd2O3 and the width of the Gd2O3 layer, have been maintained in the new Fourier chopper for HRFD. The rotor is produced from a high-strength Al-based alloy and allows a maximum rotation speed of 6000 rpm. As compared to the previous version, the rotor and the stator are installed in a hermetic casing, the mechanical design of the stator allows an exact configuration and fixation of the pick-up signal phase, a new type of incremental magnetic pickup sensor of the chopper disk rotation speed with an interpolation factor of 2 instead of 8 is applied, rotor vacuum and vibration monitoring sensors are installed, and a new control system for the stator position is used. The new pick-up signal sensor and control system have allowed the differential nonlinearity of the rotor instantaneous speed to be decreased to ~2.5%. The chopper control and monitoring system, based on an Omron logic controller, provides the predefined law of change in the Fourier chopper rotation speed and monitors the readings of the vacuum, vibration and temperature control sensors.
        Speaker: Mr Nikolay Zernin (JINR, FLNP, Department of Spectrometers Complex (DSC))
        Slides
      • 13:00
        Modernization of the Management and Control System for the Cold Neutron Moderator at the Fast Pulsed Reactor 15m
        The management and control system of the cold neutron moderator allows the engineering staff to monitor the main parameters of the moderator during its operation, including the gas blower rotation speed, the consumption and temperature of helium, the vacuum in the jacket, and the movement of pellets in the transport pipe. Today, a complex upgrade of the cold neutron moderator at the fast pulsed reactor "IBR-2M" is underway. The paper presents the current version of the structure of the management and control system for the cold moderator. Interface converters are used to connect the management and control equipment of the cold moderator to the computer. The main interface of the data acquisition and control system of the executive devices is RS-485. Specialized software has been created to operate the system of management and control of the cold neutron moderator.
        Speaker: Mr Alexey Altynov (JINR)
        Slides
      • 13:15
        The New Data Acquisition System MPD-32 for the High-Resolution Fourier Diffractometer at the IBR-2 Pulsed Reactor 15m
        In the Laboratory of Neutron Physics, a new high-performance data acquisition system (DAQ) is being developed in the framework of the project on the creation of a high-aperture backscattering detector (BSD) for the high-resolution Fourier diffractometer HRFD. The designed increase in the BSD aperture by a factor of 12.5, together with an increase in the neutron flux on the sample by a factor of 2-3 due to the employment of the new neutron guide, demands raising the neutron registration rate to ~3x10^7 n/s [1]. In addition to signals from the multielement scintillation detector BSD, the time encoders also digitize pick-up signals from the chopper as well as reactor startups, which are transmitted to the computer in list mode to be recorded on disk for further processing. This has required the development of new electronics and programs, as the MPD-240-based DAQ system used today has a neutron registration limit on the level of ~10^6 n/s. Earlier, in order to increase the transmission capacity of the data acquisition systems with a USB 2.0 interface for the IBR-2 spectrometers, the FLINK USB 3.0 was developed [2] to provide links between the modules having an optical interface and a computer according to the USB 3.0 protocol. This solved the problem of increasing the performance of the DAQ systems for all the spectrometers except those of the HRFD, which has undergone modernization. This work presents the results of the development of a high-performance data acquisition system on the basis of MPD-32 blocks integrated into a common system by a high-speed interblock interface and a USB 3.0 computer interface with an optical fiber extender. References: [1] A. Balagurov et al., "High-resolution neutron Fourier diffractometer at the IBR-2 pulsed reactor: A new concept", Nuclear Inst. and Methods in Physics Research B 436 (2018) 263-271. [2] V.V. Shvetsov, V.A. Drozdov, "Increasing Bandwidth of Data Acquisition Systems on IBR-2 Reactor Spectrometers in FLNP", Proceedings of the XXVI International Symposium on Nuclear Electronics & Computing (NEC'2017), Becici, Budva, Montenegro, September 25-29, 2017, CEUR Workshop Proceedings Vol. 2023, pp. 293-298.
        Speaker: Mr Vasilii Shvetsov (FLNP)
        Slides
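        To illustrate the list-mode recording mentioned above, the sketch below decodes raw DAQ words off-line with numpy, separating neutron events, chopper pick-up signals and reactor starts by a source tag. The 32-bit word layout (2-bit tag plus 30-bit time stamp) is a hypothetical assumption; the real MPD-32 format is defined by the FLNP electronics and differs.

        # Off-line decoding of list-mode DAQ data with an assumed, simplified word format.
        import numpy as np

        TAG_NEUTRON, TAG_PICKUP, TAG_START = 0, 1, 2   # assumed source tags

        def decode(words: np.ndarray):
            """Split raw 32-bit list-mode words into per-source time-stamp arrays."""
            tags = words >> 30                 # upper 2 bits: event source (assumption)
            times = words & 0x3FFFFFFF         # lower 30 bits: time stamp in clock ticks
            return {tag: times[tags == tag] for tag in (TAG_NEUTRON, TAG_PICKUP, TAG_START)}

        if __name__ == "__main__":
            # Synthetic example: 10 words with random tags and increasing time stamps.
            rng = np.random.default_rng(0)
            raw = (rng.integers(0, 3, 10, dtype=np.uint32) << 30) | np.arange(10, dtype=np.uint32)
            streams = decode(raw)
            print("neutron events:", streams[TAG_NEUTRON])
            print("chopper pick-up:", streams[TAG_PICKUP])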
    • 13:30 15:00
      LUNCH 1h 30m
    • 15:00 16:15
      Research Data Infrastructures Splendid Conference & SPA Resort, Conference Hall Petroviċa

      Splendid Conference & SPA Resort, Conference Hall Petroviċa

      Convener: Dr VIACHESLAV ILIIN (NRC Kurchatov Institute)
      • 15:00
        Database ecosystem is the way to Data Lakes 15m
        The paper examines the design of existing data warehouse solutions and identifies the main trends in the development of these technologies. An analysis of existing big data classifications allowed us to propose our own measure for determining the category of data. On its basis, a new classification of big data is proposed that takes the CAP theorem into account. The characteristics of the data in each class are described. The developed big data classification is aimed at solving the problem of selecting tools for the development of an ecosystem. The practical significance of the results is illustrated by determining the big data category of actual information systems.
        Speaker: Ms Nadezhda Shchegoleva (Saint Petersburg Electrotechnical University "LETI")
      • 15:15
        On Bandwidth on Demand Problem 15m
        In recent years backbone traffic between data centers (DCs) has been growing steadily. According to TeleGeography, on the most demanded route, across the Atlantic Ocean, the share of such traffic had reached 75% by the end of 2017 and will exceed 93% in 2023. This is explained by the development of the global cloud service market, which is currently concentrated in North America and Europe. The traffic growth between DCs is therefore driven mainly by the DCs of cloud providers, as well as by enterprise DCs that use hybrid clouds. Cloud DCs, however, impose special requirements on channel bandwidth allocation and charging policy. The most promising approach to satisfying these requirements is to provide channel bandwidth on a "pay and go" model, i.e. only when there is a need for it: bandwidth on demand. Given the high penetration of SDN and NFV technologies within DCs, cloud providers and their enterprise customers require SDN and NFV implementations for backbone networks that allow bandwidth on demand (a BoD service) to balance the computational load and to carry out data migration between DCs. In this paper we consider the protocols and technologies at different levels of the OSI reference model that can help implement bandwidth on demand. At the transport and network levels we can employ multipath protocols that transmit data over several routes simultaneously (we call the data flows on different routes subflows). There are two schemes of route generation: static and dynamic. In the static scheme the number of used routes is fixed; we note the drawbacks of this approach, which favour the dynamic scheme, where routes are allocated dynamically depending on the current network load and bandwidth requirements. We also discuss balancing techniques (Equal Cost Multi-Path (ECMP), Ethernet VPN (EVPN), Link Aggregation Group (LAG)) that allow different transport flows to be routed through disjoint channels, which is an advantage when used together with multipath protocols. We then present a mathematical statement of the bandwidth on demand problem and consider it in relation to the optical transport network (OTN). The main issue here is that an optical network is a channel-switching network, whereas the OSI reference model was developed for packet-switching networks. We assume that the optical network contains Reconfigurable Optical Add-Drop Multiplexers (ROADM), which offer the flexibility to add wavelengths or easily change their destination; in addition, they can be managed remotely, providing full control and monitoring of the entire high-capacity infrastructure. To implement bandwidth on demand, network providers should maintain a certain level of bandwidth reservation. In this paper we do not consider the problem of selecting the reservation size and assume that the required bandwidth is always available to clients. Under this assumption, Menger's theorem lets us deduce the number of wavelengths needed for a data transfer with a given bandwidth requirement (a toy illustration is sketched after this entry). This work is supported by the Russian Ministry of Science and Higher Education, grant #05.613.21.0088, unique ID RFMEFI61318X0088.
        Speaker: Evgeniy Stepanov (Lomonosov Moscow State University)
        Slides
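        The toy script below illustrates the wavelength-counting argument from the abstract above: with unit-capacity links, the number of edge-disjoint paths between two data centres (Menger's theorem) bounds how many parallel wavelength channels can be provisioned for a requested bandwidth. The topology, the demand and the 100 Gbit/s per-wavelength capacity are illustrative assumptions, not values from the paper.

        # Edge-disjoint paths vs. wavelengths needed for an on-demand bandwidth request.
        import math
        import networkx as nx

        WAVELENGTH_GBPS = 100          # assumed capacity of one wavelength channel

        def wavelengths_needed(demand_gbps: float) -> int:
            return math.ceil(demand_gbps / WAVELENGTH_GBPS)

        # Small made-up topology connecting two data centres through transit nodes.
        G = nx.Graph()
        G.add_edges_from([("DC1", "A"), ("DC1", "B"), ("A", "C"), ("B", "C"),
                          ("A", "DC2"), ("B", "DC2"), ("C", "DC2")])

        disjoint = list(nx.edge_disjoint_paths(G, "DC1", "DC2"))
        demand = 250                                    # Gbit/s requested on demand
        need = wavelengths_needed(demand)
        print("edge-disjoint paths available:", len(disjoint))
        print(f"wavelengths needed for {demand} Gbit/s:", need)
        print("demand satisfiable:", need <= len(disjoint))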
      • 15:30
        Distributed Data Management System for LHAASO 15m
        The LHAASO (Large High Altitude Air Shower Observatory) experiment of IHEP will generate 6 PB of data per year in the future. This massive data processing faces many challenges in a distributed computing environment; for example, some sites may have no local HEP storage system, which makes distributed computing there unavailable. Our goal is to make the data accessible to LHAASO members from any remote site. To achieve this, we use EOS as our local storage system and LEAF as the data federation system. LEAF is a data cache and access system across remote sites proposed by IHEP. LEAF presents a unified file-system view for both local and remote sites and supports direct data access on demand. In this report, we present the system architecture, data workflow and performance evaluation of LEAF in LHAASO.
        Speaker: Dr Haibo Li (Institute of High Energy Physics,Chinese Academy of Sciences)
        Slides
      • 15:45
        Federated storage initiatives at NRC "Kurchatov Institute" 15m
        Several R&D projects in Federated Storage techniques have emerged in the last years with a goal of exploring the evolution of distributed storage in Exabyte era, which is defined by storage demands of HL-LHC and several other international scientific collaborations. In this talk we will report on Federated Storage initiatives at NRC "Kurchatov Institute" including participation in the DataLake project and deployment of a storage federation for in-house computing resources.
        Speaker: Mr Andrey Kiryanov (PNPI)
        Slides
    • 15:00 16:15
      Triggering, Data Acquisition, Control Systems Splendid Conference & SPA Resort, Conference Hall Baltšiċa

      Splendid Conference & SPA Resort, Conference Hall Baltšiċa

      Convener: Dr Vladimir Karjavine (JINR)
      • 15:00
        Nano-metrology of macroscopic systems: Laser metrology laboratory, Precision laser inclinometer, Interferometric Length Gauge, data collection method and their presentation. Development perspectives 15m
        The Precision Laser Inclinometer (PLI) is a new type of sensor able to measure the inclination of a surface in an angular range from 0.01 to 100 μrad and in a frequency range from 10 μHz to 1 Hz. The principal feature of the new inclinometer is its precision, which by the latest estimates can reach 5·10⁻⁹ rad. The inclinometer is essentially a new kind of two-coordinate angular seismograph for surface waves, with the ability to determine the direction of the wave. The Interferometric Length Gauge measures the distance between two strings of two string reference lines; the expected accuracy is about 10 μm over a length of 16 m with a possible air temperature variation of ±1 °C. The talk describes the data from these instruments, the problems associated with these data, and the ways of processing and presenting them.
        Speaker: Mr Ivan Bednyakov (JINR)
      • 15:15
        Electronics of the fission fragments spectrometer "COMETA-F" 15m
        The article describes the electronics of the time-of-flight two-arm fission-fragment spectrometer COMETA-F. The time pick-off detector comprises a thin electron-conversion foil, an electrostatic mirror and two microchannel plates (supplied by Baspik) mounted in a chevron configuration. Mosaics of Si PIN diodes are used to measure both the energy and the time of flight. The waveforms of the detected signals are digitized by V1742 modules, which sample the signals with the DRS4 chip, a switched-capacitor array that can sample the input signal at a frequency of 5 GHz. The start of registration is provided by a specially designed trigger module. The V945 discriminator thresholds are individually settable in the range from -1 mV to -255 mV via VME through an 8-bit DAC. The use of Si semiconductor detectors in time-of-flight-energy spectrometry of fission fragments is known to suffer from delicate methodological problems due to the "amplitude (pulse-height) defect" and "plasma delay" effects in the E and TOF channels, respectively. Correctly accounting for both effects requires a rather complicated mass-reconstruction procedure. An off-line algorithm and a mass-reconstruction procedure based on the PHD parametrization let us reproduce ion masses quite satisfactorily over a wide range of masses and energies (a simplified numerical sketch of the TOF-E relations is given after this entry).
        Speaker: Dr Oleg Strekalovsky (JINR FLNR)
        Slides
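        Two back-of-the-envelope pieces of the TOF-E method described above are sketched here: mapping an 8-bit DAC code onto a discriminator threshold in the stated -1 mV to -255 mV range, and a non-relativistic mass estimate from flight time and deposited energy. The 1 mV-per-count step, flight base and example numbers are assumptions; the real reconstruction additionally corrects for the pulse-height defect and plasma delay, which are omitted here.

        # Illustrative TOF-E arithmetic, not the COMETA-F analysis chain.
        AMU_MEV = 931.494       # atomic mass unit in MeV/c^2
        C_CM_NS = 29.9792458    # speed of light in cm/ns

        def dac_threshold_mv(code: int) -> float:
            """8-bit DAC code (1..255) -> threshold in mV, assuming -1 mV per count."""
            if not 1 <= code <= 255:
                raise ValueError("DAC code must be 1..255")
            return -float(code)

        def mass_amu(energy_mev: float, tof_ns: float, base_cm: float) -> float:
            """Non-relativistic estimate m = 2E/v^2 for a fission fragment."""
            v = base_cm / tof_ns / C_CM_NS          # velocity in units of c
            return 2.0 * energy_mev / (v * v) / AMU_MEV

        if __name__ == "__main__":
            print("threshold for code 40:", dac_threshold_mv(40), "mV")
            # e.g. a ~100 MeV fragment flying 100 cm in ~72 ns gives roughly A ~ 100
            print("mass estimate:", round(mass_amu(100.0, 72.0, 100.0)), "amu")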
      • 15:30
        APPLICATION OF QUANTUM TECHNOLOGIES FOR THE DEVELOPMENT OF AN INTELLECTUAL CONTROL SYSTEM TO SETUP CURRENTS OF THE CORRECTIVE MAGNETS FOR THE BOOSTER SYNCHROTRON OF THE NICA FACILITY 15m
        One of the promising directions in the development of robust control systems for complex physical facilities is the application of quantum computing to building intelligent controllers based on neural networks and genetic algorithms. The main advantage of applying quantum technologies is the high speed of adaptation of the intelligent control system (ICS) to changing operating conditions. The most promising solution is to use IBM's quantum processor for quickly executing Grover's algorithm (GA) to find the "extremum" of a function of a set of control variables (a classical toy simulation of the Grover iteration is sketched after this entry). For example, while tuning the frequency of the HF stations of the NICA complex, unexpected "parasitic" oscillations may appear whose frequency spectrum cannot be predicted. Under such conditions, the task of developing a self-organizing ICS, capable of functioning and achieving the control goal in emergency situations and under information risk, is relevant for the NICA complex.
        Speaker: Mr Dmitrii Monakhov (JINR)
        Slides
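        The sketch below is a minimal classical numpy simulation of Grover amplitude amplification over a small discretized search space, illustrating the "find the extremum index" idea mentioned above. It is for illustration only; it is neither the IBM-processor implementation nor the actual NICA corrector-tuning algorithm, and the toy objective function is invented.

        # Classical simulation of Grover's search for the index of a marked item.
        import numpy as np

        def grover_search(n_items: int, marked: int) -> np.ndarray:
            """Return the measurement probabilities after ~(pi/4)*sqrt(N) iterations."""
            amps = np.full(n_items, 1.0 / np.sqrt(n_items))      # uniform superposition
            iterations = int(round(np.pi / 4 * np.sqrt(n_items)))
            for _ in range(iterations):
                amps[marked] *= -1.0                              # oracle: phase flip
                amps = 2.0 * amps.mean() - amps                   # diffusion: inversion about the mean
            return amps ** 2

        if __name__ == "__main__":
            # Toy objective over 16 discretized settings of a corrector current;
            # the "oracle" marks the index of its minimum.
            objective = np.abs(np.linspace(-1.0, 1.0, 16) - 0.33)
            target = int(np.argmin(objective))
            probs = grover_search(16, target)
            print("marked index:", target, "found with probability", round(probs[target], 3))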
      • 15:45
        PROGRAM MANAGER FOR DC-280 CYCLOTRON CONTROL SYSTEM 15m
        On March 25, 2019, the experimental hall of the Superheavy Element Factory (SHE Factory) was opened at FLNR JINR and its basic facility, the DC-280 cyclotron, was launched. The control system software of the DC-280 is based on the NI LabVIEW platform with the Datalogging and Supervisory Control (DSC) module. It consists of many programs performing corresponding tasks: device drivers, an alarm monitor, beam diagnostics, user interfaces, etc. The Program Manager was developed to supervise the running processes and inform the operator in case of failures; version control and updating of the software modules are also implemented. This paper describes the algorithm and user interface of the Program Manager (a generic illustration of the supervision logic is sketched after this entry).
        Speaker: Mrs Veronika Zabanova (FLNR, JINR)
        Slides
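        The following language-neutral sketch illustrates the supervision logic described above: launch a set of software modules, watch for failures and restart them while informing the operator. The actual Program Manager is implemented in NI LabVIEW/DSC; the module names and commands below are placeholders.

        # A generic process supervisor: restart any module that exits and report it.
        import subprocess
        import time

        MODULES = {                       # hypothetical module name -> command line
            "beam_diag": ["python", "beam_diag.py"],
            "alarms":    ["python", "alarms_monitor.py"],
        }

        def supervise(poll_s: float = 2.0) -> None:
            procs = {name: subprocess.Popen(cmd) for name, cmd in MODULES.items()}
            while True:
                for name, proc in procs.items():
                    code = proc.poll()                    # None while still running
                    if code is not None:
                        print(f"[supervisor] {name} exited with code {code}, restarting")
                        procs[name] = subprocess.Popen(MODULES[name])
                time.sleep(poll_s)

        if __name__ == "__main__":
            supervise()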
      • 16:00
        The ATLAS Electron and Photon Trigger Performance in Run 2 15m
        ATLAS electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for recording signals for a wide variety of physics: from Standard Model processes to searches for new phenomena in both proton-proton and heavy-ion collisions. The main triggers used during Run 2 (2015-2018) for these physics studies were a single-electron trigger with an ET threshold around 25 GeV and a diphoton trigger with thresholds at 25 and 35 GeV. Relying on such simple, general-purpose triggers is seen as a more robust trigger strategy, at the cost of slightly higher trigger output rates, than using a large number of analysis-specific triggers. To cope with the ever-increasing luminosity and more challenging pile-up conditions at the LHC, the trigger selections needed to be optimized to control the rates and keep the efficiencies high. The ATLAS electron and photon trigger performance during Run-2 data-taking is presented, as well as the ongoing work to prepare for the even higher luminosity of Run 3 (2021-2023).
        Speaker: Mr Dmitriy Maximov (Budker Institute of Nuclear Physics)
        Slides
    • 16:15 16:30
      Coffee break 15m Conference Bar

      Conference Bar

      Montenegro, Budva, Becici

      Splendid Conference & SPA Resort, 85315 Becici, Montenegro Hotel Splendid
    • 16:30 18:20
      Computations with Hybrid Systems (CPU, GPU, coprocessors) Splendid Conference & SPA Resort, Conference Hall Petroviċa

      Splendid Conference & SPA Resort, Conference Hall Petroviċa

      Convener: Dr Dmitry Podgainy (JINR)
      • 16:30
        Using the GOVORUN supercomputer for the NICA megaproject 15m
        At present, the GOVORUN supercomputer is used both for theoretical studies and for event simulation for the MPD experiment of the NICA megaproject. To generate simulated data of the MPD experiment, the computing components of the GOVORUN supercomputer, i.e. Skylake (2880 computing cores) and KNL (6048 computing cores), are used; the data are stored on the ultrafast data storage system (UDSS) under the Lustre file system, with subsequent transfer to cold storage controlled by the EOS and ZFS file systems. The UDSS currently has five storage servers with 12 SSD disks using NVMe connection technology and a total capacity of 120 TB, which ensures low data-access times and a data acquisition/output rate of 30 TB per second. Due to the high performance of the UDSS, over 100 million events for the MPD experiment had already been generated by September 2019, and more than 30 million events have been reconstructed. In the future, other MC generators are expected to be used as well. The implementation of different computing models for the NICA megaproject requires confirmation of each model's efficiency, i.e. meeting the requirements for the time characteristics of acquiring data from the detectors with their subsequent transfer to processing, analysis and storage, as well as the requirements for the efficiency of event modeling and processing in the experiment. For these purposes it is necessary to carry out tests in a real software and computing environment that includes all the required components. At present, the GOVORUN supercomputer is such an environment; it contains the latest computing resources and a hyperconverged UDSS with a software-defined architecture, which allows maximum flexibility of data storage configurations. It is planned to use the DIRAC software for managing jobs and the process of reading/recording/processing data from various types of storage and file systems (a job-submission sketch is given after this entry). All of the above will allow us to test a basic set of data storage and transmission technologies, simulate data flows, choose optimal distributed file systems and increase the efficiency of event modeling and processing. The studies in this direction were supported by the RFBR grants ("Megascience – NICA") No. 18-02-40101 and No. 18-02-40102.
        Speaker: Dr Dmitry Podgainy (JINR)
        Slides
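        Since DIRAC is planned for job management in the workflow described above, the sketch below shows how a simulation job could be submitted through the documented DIRAC Python API. The executable name and CPU-time value are assumptions, and the client initialization details depend on the DIRAC version and installation; this is not the actual MPD production chain.

        # A hedged sketch of DIRAC job submission following the public Job/Dirac API.
        from DIRAC.Core.Base import Script
        Script.parseCommandLine(ignoreErrors=True)      # standard DIRAC client set-up

        from DIRAC.Interfaces.API.Dirac import Dirac
        from DIRAC.Interfaces.API.Job import Job

        job = Job()
        job.setName("mpd_mc_test")
        job.setExecutable("run_mpd_simulation.sh")      # hypothetical payload script
        job.setCPUTime(3600)

        result = Dirac().submitJob(job)
        if result["OK"]:
            print("submitted job", result["Value"])
        else:
            print("submission failed:", result["Message"])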
      • 16:45
        HPC Solutions for HEP applications at IHEP 15m
        High Performance Computing (HPC) is playing an increasingly important role in accelerating High Energy Physics (HEP) computing and scientific discovery, and more and more HEP applications are adopting parallel software to obtain much better performance. To help physicists obtain scientific output effectively, a Slurm cluster has been constructed to provide HPC solutions for multiple applications including HEPS, BES, Lattice QCD and JUNO. The Slurm cluster consists of heterogeneous resources, including CPUs and GPU cards, and MPI and GPU jobs are scheduled based on priority, fair share and resource-limit regulations. The report first presents the initial status of the cluster, then describes the HPC solutions for the mentioned HEP applications in detail, including the production system and supporting systems, and finally outlines the next steps in the near future (an example of GPU job submission to a Slurm cluster is sketched after this entry).
        Speaker: Ms Ran Du (Institute of High Energy Physics)
        Slides
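        As an illustration of how users typically submit GPU work to a Slurm cluster of the kind described above, the sketch below writes a batch script and hands it to sbatch. The partition name, resource requests and payload command are illustrative assumptions, not the actual IHEP configuration.

        # Compose a Slurm batch script for one GPU and submit it with sbatch.
        import subprocess
        import tempfile

        BATCH = """#!/bin/bash
        #SBATCH --job-name=hep_gpu_test
        #SBATCH --partition=gpu          # assumed partition name
        #SBATCH --gres=gpu:1             # request one GPU card
        #SBATCH --ntasks=1
        #SBATCH --time=01:00:00
        srun python train_model.py       # hypothetical payload
        """

        def submit(script_text: str) -> str:
            with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
                f.write(script_text)
                path = f.name
            out = subprocess.run(["sbatch", path], capture_output=True, text=True, check=True)
            return out.stdout.strip()     # e.g. "Submitted batch job 123456"

        if __name__ == "__main__":
            print(submit(BATCH))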
      • 17:00
        Digital Lab Platform implementation for processing and analysis of heterogeneous neurobiological data 20m
        This report presents the Digital Lab Platform implementation project for organizing the storage, processing and analysis of heterogeneous neurobiological data (MRI, fMRI, EEG) obtained at the Kurchatov Institute Resource Center of Nuclear Physical Research Methods "Cognimed". For this purpose, a new Digital Lab module, the "Neuroimaging" system, was created; it organizes the interaction between the Resource Center "Cognimed", the Complex for Simulation and Data Processing and the scientists at the Kurchatov Institute (Moscow).
        Speaker: Irina Enyagina (Kurchatov Institute)
        Slides
      • 17:20
        Realtime remote rendering of GPGPU accelerated Schrödinger's Smoke 15m
        This paper focuses on integrating a GPGPU-based Schrödinger smoke solver into existing interactive content display systems. The architectures of four visualization approaches are compared: server-side rendering with ParaView; server-side rendering with NoVNC; local rendering on the client with model data processed on the server side; and local rendering with model computation on client resources. The paper assesses the quality of these approaches by analyzing the architecture, FPS and latency of the prototypes created for each of them. Both server-side and client-side rendering were performed on the same model code base, which utilizes CUDA 10.1 and ArrayFire for the Schrödinger smoke computation, while Unity3d and ParaView were used for rendering. The study shows how high utilization of coprocessors on the server side complicates real-time display of information. Aspects of multi-user application of the created technology and the related limitations are also considered, along with the prospects and problems of its development and integration into user applications.
        Speaker: Mr Oleg Iakushkin (Saint Petersburg State University)
      • 17:35
        Multiagent information technologies in system analysis 15m
        Agent technologies currently play an increasingly important role in the information technology industry, given their ability to learn and evolve, to solve information management problems, to employ data visualization, and many other benefits. As a computer program, an agent deals with a challenge Internet users face every single day: obtaining reliable and relevant data in a specific thematic field. A multiagent system consists of two or more autonomous agents and is aimed at solving complex problems, such as Big Data, data mining, and the processing of primary structured and unstructured information (including text, numbers and multimedia data). The paper discusses the intended use of multiagent technologies using an example from nuclear engineering: the creation of specialized agent programs operating in the interests of the user to collect data from information resources at existing nuclear plants, and of processing agents that highlight the key information for users.
        Speaker: Ms Vera Inkina (NRNU MEPhI)
      • 17:50
        Parallel Algorithms for the Investigation of Josephson Junctions 15m
        Speaker: Dr Oksana Streltsova (JINR)
        Slides
      • 18:05
        An Interval-Valued Image Based Approach for Edge Detection 15m
        The ability to propagate uncertainty information during image processing can be very important in different applications. Edge detection is an important pre-processing step in image analysis, and the quality of image analysis strongly depends on it. Edge detectors are intended to detect and localize the boundaries or silhouettes of objects appearing in images. Many edge detection methods have been developed, but they may have weaknesses in correctly detecting the scope of complications in aerial or medical images because of the high variation rate in such images. This paper introduces a verification framework for detecting edges based on interval techniques, measuring the diversity of pixel intensities and the randomness of the intensity distribution within the framework of information theory (a generic interval-based sketch is given after this entry).
        Speaker: Mr Andrey Nechaevskiy (JINR)
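        The sketch below illustrates the interval idea in its simplest form: each pixel is represented by the [min, max] interval of intensities in its neighbourhood, and the interval width is used as an edge indicator. This is a generic illustration with numpy/scipy, not the authors' information-theoretic framework, and the synthetic test image is invented.

        # Interval-valued image: per-pixel [min, max] of the neighbourhood; width = edge strength.
        import numpy as np
        from scipy.ndimage import maximum_filter, minimum_filter

        def interval_edge_map(image: np.ndarray, size: int = 3) -> np.ndarray:
            """Return a normalized edge map given by the width of the local intensity interval."""
            upper = maximum_filter(image.astype(float), size=size)
            lower = minimum_filter(image.astype(float), size=size)
            width = upper - lower                     # uncertainty of the local intensity
            return width / width.max() if width.max() > 0 else width

        if __name__ == "__main__":
            # Synthetic test image: a bright square on a dark background.
            img = np.zeros((64, 64))
            img[16:48, 16:48] = 1.0
            edges = interval_edge_map(img)
            print("edge pixels (width > 0.5):", int((edges > 0.5).sum()))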
    • 16:30 18:15
      Triggering, Data Acquisition, Control Systems Splendid Conference & SPA Resort, Conference Hall Baltšiċa

      Splendid Conference & SPA Resort, Conference Hall Baltšiċa

      • 16:30
        The Joint Triggering and DAQ System of Experimental Complex NEVOD for Multicomponent Cosmic Ray Investigations 15m
        The Unique Scientific Facility NEVOD is a large experimental complex including a number of setups for investigating various components of cosmic rays in a wide range of energies and zenith angles. The main setup of the complex is the Cherenkov water detector (CWD) of 2000 m³ volume filled with a dense lattice of 91 quasi-spherical measuring modules; the CWD can operate as a 4π hodoscope or calorimeter. Its joint operation with the streamer-tube coordinate-tracking detector DECOR (70 m²) made it possible, for the first time, to measure the dependence of the muon-bundle energy deposit on the energy of primary cosmic rays, an important step in understanding "the muon puzzle". The complex also includes the array of neutron detectors PRISMA, the Calibration Telescope System, a new setup for the study of extensive air showers NEVOD-EAS of 1000 m² area, the URAN setup for investigating the hadron component of EAS, and new drift-chamber installations for studies of dense muon bundles. All of them are combined by a joint triggering system that unites their data and allows all events detected by the separate sub-systems to be matched. The talk presents the design and main principles of the joint operation of the Experimental Complex NEVOD and its future detectors.
        Speaker: Mr Egor Zadeba (National Research Nuclear University MEPhI (Moscow Engineering Physics Institute))
      • 16:45
        ATLAS Muon Trigger performance 15m
        Events containing muons in the final state are an important signature for many analyses carried out at the Large Hadron Collider (LHC), including both Standard Model measurements and searches for new physics. To study such events, an efficient and well-understood muon trigger is required. The ATLAS muon trigger consists of a hardware-based system (Level 1) and a software-based reconstruction (High Level Trigger). Due to the high luminosity in Run 2, several improvements were implemented to keep the trigger rate low while still maintaining a high efficiency. Recent improvements include requiring a coincidence of hits in the muon spectrometer and the calorimeter, and optimised muon isolation. We present an overview of how muons are triggered on, recent improvements, the performance of the muon trigger in Run-2 data, and an outlook on the improvements planned for Run 3.
        Speaker: Dr Antonio Policicchio (Sapienza Università di Roma &amp; INFN Roma)
        Slides
      • 17:00
        The ATLAS Run-2 Trigger Menu 15m
        The ATLAS experiment aims at recording about 1 kHz of physics collisions, starting with an LHC design bunch crossing rate of 40 MHz. To reduce the significant background rate while maintaining a high selection efficiency for rare physics events (such as beyond the Standard Model physics), a two-level trigger system is used. Events are selected based on physics signatures such as the presence of energetic leptons, photons, jets or large missing energy. The trigger system exploits topological information, as well as multivariate methods, to carry out the necessary physics filtering for the many analyses that are pursued by the ATLAS community. In total, the ATLAS online selection consists of around 1500 individual triggers. A Trigger Menu is the compilation of these triggers; it specifies the physics selection algorithms to be used during data taking and the rate and bandwidth a given trigger is allocated. Trigger menus must reflect the physics goals for a given run and must also take into consideration the instantaneous luminosity of the LHC and limitations from the ATLAS detector readout and offline processing farm. For the 2017-2018 run, the ATLAS trigger was enhanced to handle higher instantaneous luminosities and to ensure the robustness of the selection against a higher average number of interactions per bunch crossing. We describe the design criteria for the trigger menu for Run 2. We discuss several aspects of the process of planning the trigger menu, starting from how ATLAS physics goals and the need for detector performance measurements enter the menu design, and how rate, bandwidth, and CPU constraints are folded in during the compilation of the menu. We present the tools that allow us to predict and optimize the trigger rates and CPU consumption for the anticipated LHC luminosities. We outline the online system that we implemented to monitor deviations from the individual trigger target rates, and to quickly react to the changing LHC conditions and data taking scenarios. Finally, we give an overview of the 2015-2018 Trigger Menu and performance, allowing the audience to get a taste of the broad physics program that the trigger is supporting.
        Speaker: Dr LIGANG XIA (UNIVERSITY OF WARWICK)
        Slides
      • 17:15
        Implementation of the ATLAS trigger within the multi-threaded AthenaMT framework 15m
        Athena is the software framework used in the ATLAS experiment throughout the data processing path, from the software trigger system through offline event reconstruction to physics analysis. The shift from high-power single-core CPUs to multi-core systems in the computing market means that the throughput capabilities of the framework have become limited by the available memory per process. For Run 2 of the Large Hadron Collider (LHC), ATLAS exploited a multi-process forking approach with the copy-on-write mechanism to reduce memory use. To better match the increasing CPU core count and the correspondingly decreasing memory available per core, a multi-threaded framework, AthenaMT, has been designed and is now being implemented. The ATLAS High Level Trigger (HLT) system has been remodelled to fit the new framework and to rely on common solutions between online and offline software to a greater extent than in Run 2. We present the implementation of the new HLT system within the AthenaMT framework, which will be used in ATLAS data-taking during Run 3 (2021-2023) of the LHC.
        Speaker: Rafal Bielski (CERN)
        Slides
      • 17:30
        FELIX: commissioning the new detector interface for the ATLAS trigger and readout system 15m
        After the current LHC shutdown (2019-2021), the ATLAS experiment will be required to operate in an increasingly harsh collision environment. To maintain physics performance, the ATLAS experiment will undergo a series of upgrades during the shutdown. A key goal of this upgrade is to improve the capacity and flexibility of the detector readout system. To this end, the Front-End Link eXchange (FELIX) system has been developed. FELIX acts as the interface between the data acquisition; detector control and TTC (Timing, Trigger and Control) systems; and new or updated trigger and detector front-end electronics. The system functions as a router between custom serial links from front end ASICs and FPGAs to data collection and processing components via a commodity switched network. FELIX also forwards the LHC bunch-crossing clock, fixed latency trigger accepts and resets received from the TTC system to front-end electronics. FELIX uses commodity server technology in combination with FPGA-based PCIe I/O cards. FELIX servers run a software routing platform serving data to network clients. This presentation will cover the design of FELIX and the results of the installation and commissioning activities for the full system in summer 2019.
        Speaker: Nicolina Ilic (CERN)
        Slides
      • 17:45
        The automatic control system of the 8th Phasotron tract 15m
        The JINR Phasotron is the basic research facility of the Laboratory of Nuclear Problems of JINR. In 1985, a clinical complex for proton therapy of cancer patients was created on the basis of the facility; the 8th Phasotron tract is used for the tasks of this complex. The tract consists of 15 elements: 2 rotary electromagnets and 13 electromagnetic lenses controlled by an automatic control system. The system in automatic mode ensures that the necessary operating modes of the elements are reached and maintained, and allows personnel to control the beam from two control stations. The report describes the current version of the automatic control system of the 8th Phasotron tract: its composition, operating principles, the issues that arose during implementation and their solutions. Based on ICP-DAS industrial controllers, 3 types of control and stabilization units for the motor-generators (and, after the upgrade, for inverter power sources) of the elements of the 8th tract were developed and implemented. Following the results of the first runs, the oil shunts were replaced with current sensors. Special software has been developed to control the elements of the 8th tract through the control units; it can be run on a PC under Windows or Linux and can be used to control the system from several locations at the same time. The software is deployed at 2 control posts. It allows one, in automatic or manual mode, to reach and maintain the required current values on the elements of the 8th tract, to correct measurement errors of the system and to signal problems.
        Speakers: Mr Andrey Yudin (JINR), Mr Vladimir Khalin (JINR)
      • 18:00
        Development of Web interactive 3D environment for event display in 'Muon g-2' (Fermilab) and 'MEG II' (PSI) experiments 15m
        There are many different ways to implement a remote event viewer for detectors in physics experiments. Two different approaches to implementing a cross-platform 3D remote event display are discussed in detail. Python, VTK tools, Matplotlib and additional libraries were used to implement the event display application for the 'Muon g-2' experiment (2017); modern JavaScript, NodeJS, WebGL, ThreeJS and the React-Redux framework were chosen for the 'MEG II' experiment (2019). The advantages and disadvantages of the two approaches are examined in the report. Cross-platform compatibility issues and the creation of a convenient software development environment are also discussed.
        Speaker: Mr Viktor Krylov (Joint Institute for Nuclear Research (JINR))
    • 18:15 18:45
      Students school closing Splendid Conference & SPA Resort, Conference Hall Baltšiċa

      Splendid Conference & SPA Resort, Conference Hall Baltšiċa

    • 21:00 23:30
      CONFERENCE DINNER Splendid Conference & SPA Resort, Conference Hall Petroviċa

      Splendid Conference & SPA Resort, Conference Hall Petroviċa

    • 09:00 11:00
      Computing for Large Scale Facilities (LHC, FAIR, NICA, SKA, PIC, XFEL, ELI, etc.) Splendid Conference & SPA Resort, Conference Hall Baltšiċa

      Splendid Conference & SPA Resort, Conference Hall Baltšiċa

      Convener: Dr VIACHESLAV ILIIN (NRC Kurchatov Institute)
      • 09:00
        Automation of (big) data processing for scientific research in heterogeneous distributed computing systems. Lessons of BigPanDA project 15m
        The increasing number of big scientific projects requiring the processing of enormous amounts of data has, over the last decade, encouraged the computing community to find new solutions for data processing. The BigPanDA project, established in 2013, was devoted to exploring the possibility of common solutions that allow transparent usage of distributed heterogeneous computing resources by a wide range of scientific communities. This talk presents an overview of the BigPanDA project and the results of its research and development. Special attention is paid to how the achievements of the BigPanDA project will be used to support the automation of data processing for the next generation of JINR experiments.
        Speaker: Mr Danila Oleynik (JINR LIT)
        Slides
      • 09:15
        Computing Resource Information Catalog: a unified information framework for LHC distributed computing and beyond 15m
        The Worldwide LHC Computing Grid infrastructure (WLCG) connects together compute and storage resources offered by about 200 computing centers affiliated with research institutes participating in the LHC scientific program. The main mission of this global collaboration is to provide the computing and storage capacity to perform petabyte-scale data processing and physics analysis. Following increasing demands for processing and storage resources, the experiments complement the pledged resources provided by the WLCG infrastructure with opportunistic resources such as cloud platforms, HPC and volunteer computing. They also integrate new data storage technologies. In order to be used effectively, all these heterogeneous distributed resources should be well described, configured and integrated with high-level experiment-oriented middleware applications and frameworks. This contribution describes a high-level information middleware, the Computing Resource Information Catalog (CRIC), which provides a reliable and complete topology and configuration description for a large-scale distributed heterogeneous computing infrastructure. The system aims to facilitate distributed computing operations for the LHC experiments and to consolidate WLCG topology information. CRIC aggregates information coming from various low-level information sources and complements the topology description with experiment-specific settings required by the LHC experiments in order to exploit computing resources. Being an experiment-oriented but still experiment-independent information middleware, CRIC offers a generic solution, which can be successfully applied at the global WLCG level, for a particular LHC experiment, for instance CMS or ATLAS, or even for a special task. The overall plugin-based architecture of CRIC is presented, showing how CRIC components can be adopted and customized to serve the various needs of a given experiment. The paper also discusses recent developments and the ongoing implementation of the universal CRIC solution for the description of a generic distributed infrastructure.
        Speaker: Mr Alexey Anisenkov (BINP)
      • 09:30
        AstroDS — distributed storage for large astroparticle physics facilities 15m
        Currently, a number of experimental facilities of the mega-science class in the field of particle astrophysics are being built and are already operating in the world. An important feature of this class of projects is the huge flow of data produced, the participation of many organizations and, as a result, the distributed nature of data processing and analysis. To meet similar demands in high energy physics, the WLCG grid was deployed as part of the LHC project. On the one hand, this solution showed high efficiency; on the other hand, it turned out to be a rather heavy solution that requires high administrative costs, highly qualified staff and a very homogeneous environment for the applications to operate in. The paper considers the architecture of a distributed storage for astrophysical experiments, AstroDS, using the example of the KASCADE and TAIGA experiments. The main ideas of the proposed approach are as follows: • unification of access to local storage data, without changing their structure, through corresponding adapter modules; • use of local data access policies; • data transfer only at the moment of actual access; • search and aggregation of the necessary data through user requests to the metadata (a schematic sketch of this access pattern is given after this entry). A feature of the system is its orientation towards storing source data as well as primary processed data, for example data after calibration, using the Write-Once-Read-Many method. Adding data to local repositories is done through special local services that provide, among other things, semi-automatic collection of meta-information while new data are being uploaded. At present, a prototype of the system has been deployed at SINP MSU, where the technology of building distributed storage for particle astrophysics is being developed. This work was supported by the RSF Grant No. 18-41-06003.
        Speaker: Dr Alexander Kryukov (SINP MSU)
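        The sketch below schematically illustrates the metadata-driven access pattern described above: files are selected by a metadata query and transferred from their local storage only when actually opened. All names (catalogue fields, the lazy-file class, the fetch step) are invented for illustration and are not the AstroDS API.

        # Hypothetical metadata catalogue with lazy, on-demand data transfer.
        from dataclasses import dataclass

        @dataclass
        class FileRecord:
            name: str
            experiment: str
            run: int
            local_site: str

        CATALOGUE = [
            FileRecord("taiga_run101.dat", "TAIGA", 101, "SINP"),
            FileRecord("kascade_run7.dat", "KASCADE", 7, "KIT"),
        ]

        class LazyFile:
            """Transfer the file from its local storage only on first read."""
            def __init__(self, record: FileRecord):
                self.record, self._cached = record, None
            def read(self) -> bytes:
                if self._cached is None:
                    print(f"fetching {self.record.name} from {self.record.local_site} ...")
                    self._cached = b"..."          # placeholder for the actual transfer
                return self._cached

        def query(experiment: str, min_run: int = 0):
            return [LazyFile(r) for r in CATALOGUE
                    if r.experiment == experiment and r.run >= min_run]

        if __name__ == "__main__":
            for f in query("TAIGA"):
                f.read()                           # the transfer happens here, on demand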
      • 09:45
        Realistic simulation of the MPD Time Projection Chamber with Garfield 15m
        A detailed simulation of electron drift in the MPD TPC was made with the CERN Garfield toolkit for the simulation of gaseous particle detectors. Electron transport was simulated in the 10% Ar + 90% CH4 gas mixture under the corresponding magnetic and electric fields from the MPD TPC Technical Design Report (rev. 07). Ionization processes were investigated in the wire-plane region near the Read-Out Chambers of the TPC. The Read-Out Chambers were modeled with different gaps to the gating grid and several values of the gating voltage.
        Speaker: Mr Alexander Bychkov (LHEP)
        Slides
      • 10:00
        The Visualization Method Pipeline for the Application to Dynamic Data Analysis 15m
        The new era of scientific research brings an enormous amount of data to scientists. These complex and multidimensional data structures are used for the verification of scientific hypotheses. Exploring such data requires the development of new technologies for its efficient processing, investigation and interpretation. Intellectual data analysis and statistical methods are developing rapidly, and this is where visualization methods find their place. This work describes the mathematical basis of the developed visualization tool for the analysis of multidimensional dynamic data. The tool provides a pipeline of methods which, combined, allow one to cope with a set of practical tasks (anomaly detection, cluster, trend and variation analysis) using visualization. The authors provide mathematical models of geometric operations over the data domain, algorithms for solving the mentioned classes of tasks, and several use cases with technological and economic data based on the visualization method.
        Speaker: Mr Timofei Galkin (NRNU MEPhI)
        Slides
      • 10:15
        Containerized services for FEL data processing 15m
        Modern Free Electron Laser (FEL) facilities generate huge amounts of data and require sophisticated and computationally expensive analysis. For example, recent experiments at European XFEL have generated more than 360 Tb of raw data in five days. Efficient analysis of these data is a challenging task which requires productive use of existing methods and software for data analysis over a scalable computing infrastructure. An additional challenge is that different pieces of software are optimized for diverse computing architectures (parallel MPI computing, GPU, SMP, etc.) and require various software environments. In this report we present our experience of setting up an experimental data analysis workflow in a containerized computing infrastructure. We take individual software packages for certain analysis steps, set them up as loosely coupled virtualized microservices and use Kubernetes to orchestrate multiple containers as a scalable data processing workflow (a job-submission sketch is given after this entry). This approach gives us flexibility in setting up software environments inside containers and allows easy parallelization of data processing.
        Speaker: Mr Anton Teslyuk (NRC "Kurchatov Institute")
        Slides
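        To illustrate the container orchestration described above, the sketch below renders a Kubernetes Job manifest for one analysis step and submits it by piping JSON to kubectl apply. The image name, job name and command are placeholders, not the actual XFEL analysis stack or cluster configuration.

        # Render a batch/v1 Job manifest and submit it with `kubectl apply -f -`.
        import json
        import subprocess

        def job_manifest(name: str, image: str, command: list) -> dict:
            return {
                "apiVersion": "batch/v1",
                "kind": "Job",
                "metadata": {"name": name},
                "spec": {
                    "template": {
                        "spec": {
                            "restartPolicy": "Never",
                            "containers": [{"name": name, "image": image, "command": command}],
                        }
                    }
                },
            }

        def submit(manifest: dict) -> None:
            # kubectl accepts JSON manifests on stdin (Kubernetes parses JSON as YAML).
            subprocess.run(["kubectl", "apply", "-f", "-"],
                           input=json.dumps(manifest), text=True, check=True)

        if __name__ == "__main__":
            submit(job_manifest("peak-finding-step",
                                "registry.example.org/fel/peakfinder:latest",
                                ["python", "find_peaks.py", "--input", "/data/run42"]))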
      • 10:30
        Hit Reconstruction Improvement in the Cathode Strip Chambers of the CMS Experiment 15m
        The reconstruction of charged particle trajectories in the CMS endcap muon system is based on hits detected by the Cathode Strip Chambers (CSCs). The reconstruction procedure for these multilayer detectors can be divided into two main parts: the reconstruction of hits on each layer, and the assembly of track segments within the chambers from the reconstructed hits. At the HL-LHC the increased luminosity implies higher muon and background rates which, without improvement of the existing hit reconstruction algorithm, may deteriorate the present performance of the CSC system. On one hand, the increasing hit rates will require a better precision in the identification of two or more particles that pass very close to each other. On the other, upgraded readout electronics for the CSCs provide options for improved reconstruction which have not yet been fully exploited in the offline software. Some proposed solutions for these issues, together with figures comparing the standard and improved reconstruction algorithms, are presented here.
        Speaker: Mr Nikolay Voytishin (LIT)
        Slides
      • 10:45
        Architecture of the computing system for experiments with large amount of data streams 15m
        The emergence of a new series of experiments in nuclear physics with large data streams requires a review of the general idea of computing. The well-established concept of LHC data processing involves a huge amount of data in which rare events need to be highlighted; such an approach is determined by the physics of the phenomena under study, with low densities and high energies. The new experiments are aimed at different physics, where the energy is not so high and the density is much higher. This generates a huge data stream that needs to be processed in its entirety. Selecting tools for working with large amounts of data is a separate task for the development team. Not infrequently, an architecture has had to be changed drastically because of increased data loads, control of the stored data was lost, and the collection of statistics became more and more difficult. There is a need for a solution that allows one not only to store all sorts of information, with the ability to load it from different sources, but also provides a set of tools to analyze the collected information (BigPanDA, Informatica, etc.). A data lake is a concept, an architectural approach to centralized storage, that allows one to store all structured and unstructured data with the possibility of unlimited scaling. A data lake can store structured data from relational databases (rows and columns), semi-structured data (CSV, logs, XML, JSON), unstructured data (emails, documents, PDF files) and binary data (images, audio, video). Quite popular is the approach in which incoming data are converted into metadata; this allows data to be stored in their original state, without a special architecture or the need to know in advance which questions may need to be answered, without the need to structure the data, and with various types of analytics available, from dashboards and visualizations to big data processing, real-time analytics and machine learning, to make the right decisions. We believe that this technology is well suited as a basis for the computing of new experiments. As a result of the analysis of existing solutions, the following functional modules were identified as the most necessary ones to be developed in a universal solution: • storage for all data, with the ability to create separate storage for hot/cold data, for ever-changing data or for fast streaming; • a security module; • databases for structured data; • a module of tools for working with data (analysis, data engines, dashboards, etc.); • a machine learning module; • services for the development of add-ons, modifications and deployment of the storage.
        Speaker: Prof. Alexander Degtyarev (Professor)
    • 09:00 11:00
      Innovative IT Education Splendid Conference & SPA Resort, Conference Hall Petroviċa

      Splendid Conference & SPA Resort, Conference Hall Petroviċa

      Convener: Nadezhda Tokareva (Dubna Univeristy)
      • 09:00
        Concept for the development of a digital platform for education at Dubna University 15m
        The article is devoted to e-learning technologies and adaptive educational technologies for training specialists on the basis of a digital platform. The formation of a digital educational environment is a strategic government task. Currently, Russia is implementing a number of projects aimed at creating the necessary conditions for the development of the digital economy. To prepare competent personnel for the digital economy, it is necessary to modernize the education and training system, introduce digital tools for educational activities and incorporate them into the information educational environment. Such promising areas of digital technology as machine learning, big data analytics and quantum computing, as well as a systematic approach to the formulation of subject problems, mathematical and software support, information security and other advanced technologies, are today included in the curriculum of students at Dubna University. Important principles of the digital platform in education include meeting the changing needs of employers; the formation of individual learning paths; ensuring the security of data exchanged between users of the platform; and interaction with partners who will be involved in the design and implementation of training programs. Key words: e-learning, digital platform, system approach
        Speaker: Mrs Evgenia Cheremisina (Dubna International University of Nature, Society and Man. Institute of system analysis and management)
        Slides
      • 09:15
        Methodology and technology of e-learning at Dubna University 15m
        The article is devoted to the basic approaches to the organization of e-learning at Dubna University. The priority project "Modern Digital Educational Environment in the Russian Federation" is intended to create conditions for systematic quality improvement and the expansion of continuing education opportunities for all categories of citizens through the development of the Russian digital educational space. To achieve this goal, it is necessary to introduce e-learning technologies, online learning and massive open online courses. In this regard, the modernization of vocational education is required, including through the introduction of adaptive, practice-oriented and flexible educational programs of study. For more than 15 years, the university has provided continuous training of students in a number of areas in the field of information technology and programming based on the use of distance learning technologies. To increase students' motivation to study, Dubna University harmoniously combines modern educational technologies, based on the inclusion in the educational process of open educational resources, e-learning, distance learning technologies, webinars, etc., with learning elements that enable one to gain practical skills: workshops, internships, project-based training, etc. Leading experts in the field of information technology, programming, information security, big data analytics, quantum computing, etc. take part in training the students. Key words: e-learning, educational resources, project-based learning approaches
        Speaker: Mrs Oksana Kreider (Крейдер Оксана)
        Slides
      • 09:30
        Heterogeneous IT platform “HybriLIT” for organizing the educational process on the basis of the International IT School “Data Science” 15m
        The International School on Information Technologies “Data Science” was created at the State University “Dubna”. It ensures training of IT specialists for the development of computing of megaprojects (NICA, PIC, LHC, FAIR, SKA, etc.), Big Data analytics (Data Science), digital economy and other promising directions. Educational programs of the International IT School are formed taking into account personnel needs of the Joint Institute for Nuclear Research (JINR). The program includes such subjects as “Mathematical apparatus and tools for data analysis”, “Technologies and platforms for distributed and parallel computing”, “Big Data analytics”. Training in these disciplines is held using the “HybriLIT” heterogeneous platform, which is part of the Multifunctional Information and Computing Complex (MICC) of the Laboratory of Information Technologies (LIT) JINR. The heterogeneous platform consists of the “HybriLIT” education and testing polygon and the “Govorun” supercomputer, which are combined with the unified software and information environment. Training courses on parallel programming technologies and hybrid technologies are held on the basis of the “HybriLIT” polygon. An ecosystem for tasks of machine learning and deep learning (ML/DL) and data analysis was created on the “HybriLIT” platform to study ML/DL methods, develop mathematical models and algorithms and carry out resource-intensive calculations including on graphics accelerators, which can significantly reduce the calculation time. The software and information environment was elaborated and is actively developed for the most efficient use of cluster resources. It includes the website, the service Indico, the service GitLab and others, which are used in the educational process to increase the efficiency of interaction with students. Using the “HybriLIT” education and testing polygon allows students to master novel IT solutions and technologies that will be included in the curricula of higher educational institutions only in the future.
        Speaker: Daria Priakhina (LIT JINR)
        Slides
      • 09:45
        Methodical aspects of training data scientists using the Data GRID in a virtual computer lab environment 15m
        Today it is crucial to train data scientists who serve as the bridge between cutting-edge technology and the needs of the digital economy. It is essential to teach them to improve access to Big Data, analytics tools and innovative research methods. They should be able to design and deploy Data GRID clusters, and to use and advise on such tools as machine learning, natural language processing, web scraping, big data platforms and data visualization techniques. The Virtual Computer Lab (VCL) provides a set of software and hardware-based virtualization and containerization tools that enable the flexible and on-demand provision and use of computing resources. The central methodical aspect of the VCL is the principle of self-organization, which makes the transition from a complex system of granular group security policies with a large number of restrictions to the formation of personal responsibility and respect for colleagues, which should be a solid foundation for strengthening and developing classical cultural values in the educational environment. Education in the VCL with an integrated Knowledge Management System is the process of facilitating learning, or the acquisition of knowledge, skills, values, beliefs and habits. Educational methods include storytelling, discussion, teaching, training and directed research. Technology enhances the relationships between teachers and students: when teachers effectively integrate technology into subject areas, they grow into the roles of adviser, content expert and coach, and technology helps make teaching and learning more meaningful and fun. Using the VCL, students learn to design and deploy a Data GRID cluster based on Apache Hadoop, perform basic cluster administration tasks and upload real-world data. Based on the uploaded data, they study the main components of the cluster and the essential analytics tools (an example exercise is sketched after this entry). The VCL allows us to train data scientists who can productively solve actual business and scientific problems in the field of Big Data. Data scientists are integral to supporting both leaders and developers in creating better products and paradigms; as their role in big business becomes more and more important, they are in increasingly short supply.
        Speaker: Nadezhda Tokareva (Dubna Univeristy)
        Slides
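        The sketch below shows the kind of introductory exercise students might run on such a Hadoop-based cluster: reading a CSV data set from HDFS with PySpark and computing a simple aggregation. The HDFS path and column names are assumptions for illustration only.

        # Minimal PySpark exercise: load a CSV from HDFS and aggregate per sensor.
        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = (SparkSession.builder
                 .appName("vcl-datagrid-exercise")
                 .getOrCreate())

        # Load an uploaded real-world data set from the Hadoop cluster (assumed path).
        df = spark.read.csv("hdfs:///user/student/measurements.csv",
                            header=True, inferSchema=True)

        # Basic analytics step: average value per sensor, largest first.
        (df.groupBy("sensor_id")
           .agg(F.avg("value").alias("mean_value"))
           .orderBy(F.desc("mean_value"))
           .show(10))

        spark.stop()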
      • 10:00
        Information Security Issues in a Distributed Computing Educational Environment 15m
        The intensive development of computing technology and distributed computing systems technologies has led to the emergence of new active and interactive forms of education in which students have the opportunity of wide access to electronic educational resources. At the same time, new threats and vulnerabilities have emerged, which can be classified into 3 groups: integrity and confidentiality of information in a distributed computing educational environment, protection of intellectual property of the electronic educational resources, and security of the learning management system. The report discusses the mechanisms for ensuring information security in a distributed computing educational environment. Keywords: distributed computing environment, education, information security, electronic educational resources
        Speaker: Nadezhda Tokareva (Dubna Univeristy)
        Slides
      • 10:15
        Virtual Laboratory – virtual educational tools and hands-on practicum 15m
        Experiments have always been an integral part of the experimental sciences and are one of the most effective ways to get first-hand knowledge about certain concepts and principles in a field of study such as nuclear physics. The Virtual Lab project (VLab) has a history of several years, and the project results are now used in the educational process of universities in 13 countries. The first stage of the project was devoted to the creation of the Virtual Laboratory of Nuclear Fission (www.v-labs.ru). Currently the project is developing in three directions: – the virtual laboratory of gamma spectroscopy; – the laboratory of detectors and signal processing and the laboratory of data analysis in ROOT; – preparation and conduct of hands-on practicums for university and high school students. In the framework of the VLab project several hands-on practices have been successfully held for university and high school students from different countries. During the practices students started their work with signal generators, oscilloscopes, coincidence circuits and scintillation counters, and finished by assembling a simple scintillation telescope that allowed them to register cosmic-ray particles. Then, under the supervision of young scientists, students worked with gamma-, X-ray and light-ion spectrometers. Attention was given to the analysis of experimental data. We are very interested in collaborating with teachers and scientists from the JINR Member States and Associate Members to develop the VLab project.
        Speaker: Mrs Kseniia Klygina (Joint Institute for Nuclear Research, InterGraphics LLC)
        Slides
      • 10:30
        Elective course “Nuclear Physics” for high school students – synthesis of traditional textbook with the modern computer tools 15m
        One of the ways to develop school education in Russia is the proposal to introduce elective courses in priority development areas as pre-profile training; these courses are encouraged for use in extracurricular activities. Over the past year, an elective course "Nuclear Physics" was developed for high school students, including a traditional textbook, a computer application and various interactive educational materials and 3D models available on mobile devices via QR codes. The main idea of the course can be formulated as "From Nuclear Physics to Nuclear Technologies". The elective course includes not only the basic laws of nuclear physics, but also the application of these laws in nuclear astrophysics, the synthesis of new elements, nuclear energy, nuclear medicine, ecology and radiobiology. Each chapter in the textbook provides links to additional digital materials: – video lectures, – examples of problem solving, – additional materials for advanced study. At the end of the textbook one can also find references to a virtual practicum on nuclear physics, final tests, and approximate research and project works. The textbook pays special attention to the modern achievements of nuclear physics and contains information on modern international experiments conducted at JINR and other international scientific centers.
        Speaker: Mrs Nataliya Vorontsova (Joint Institute for Nuclear Research (JINR))
        Slides
      • 10:45
        JINR educational portal («edu.jinr.ru») — open educational resources and modern visualization tools for training of young professionals for research projects 15m
        Today open online courses have shown their effectiveness for further education in various fields. We have created new courses devoted to JINR research projects: NICA/MPD, the SHE Factory, and applied research with heavy ions and neutrons. The open educational portal of JINR is being developed for university students of the JINR Member States and Associate Members, young specialists and science teachers. The portal hosts MOOC-format courses connected with priority JINR activities. The portal also contains links to digital materials that give an overview of the basic JINR physics facilities using 3D modeling tools, as well as to the Virtual Laboratory project devoted to experimental nuclear physics. Another section of the portal, "Scientists for schools", can be used as additional educational material for the school physics course. The materials of the portal can be used for student training before and during the JINR student practices, to prepare students for their research work, and for conducting specialized courses at the universities of the JINR Member States and Associate Members.
        Speaker: Ms Victoria Belaga (JINR, Dubna University)
        Slides
    • 11:00 11:20
      Coffee break 20m
    • 11:20 13:05
      Computing for Large Scale Facilities (LHC, FAIR, NICA, SKA, PIC, XFEL, ELI, etc.) Splendid Conference & SPA Resort, Conference Hall Baltšiċa

      Convener: Dr Tatiana Strizh (JINR)
      • 11:20
        Critical exponents of the directed percolation universality class: Three-loop approximation 15m
        The directed bond percolation problem is an important model in statistical physics and provides a prominent example of a non-equilibrium phase transition. Up to now its universal properties are known only to the second order of perturbation theory. Our aim here is to put forward a numerical technique by which the critical exponents of the directed percolation universality class can be calculated to higher orders of perturbation theory. It is based on the perturbative renormalization scheme in $\varepsilon$, where $\varepsilon = 4-d$ is the deviation from the upper critical dimension. Within this procedure the anomalous dimensions are evaluated in two different subtraction schemes: the minimal subtraction scheme and the null-momentum scheme. Numerical evaluation of the integrals has been performed using the Vegas algorithm from the CUBA library. The final results are compared with the analytic two-loop calculation and with Monte Carlo simulations.
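        A minimal sketch of the kind of adaptive Monte Carlo integration mentioned above, written with the Python vegas package rather than the CUBA library used by the authors; the two-dimensional integrand is a smooth placeholder, not one of the actual three-loop integrals.

          import vegas  # adaptive Monte Carlo integrator (G. P. Lepage's vegas package)

          def integrand(x):
              # Placeholder: a smooth two-dimensional test function; the real
              # three-loop renormalization-group integrals are far more involved.
              return 1.0 / (1.0 + x[0] * x[0] + x[1] * x[1])

          # Integrate over the unit square [0, 1] x [0, 1].
          integ = vegas.Integrator([[0.0, 1.0], [0.0, 1.0]])

          integ(integrand, nitn=10, neval=20000)           # warm-up: adapt the grid
          result = integ(integrand, nitn=10, neval=50000)  # final estimate

          print(result.summary())
          print("integral =", result.mean, "+-", result.sdev)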
        Speaker: Mr Lukas Mizisin (Institute of Experimental Physics, SAS, Kosice, Slovakia)
      • 11:35
        System of secure data transmission from the SHE Factory DC-280 10m
        The article presents a scheme of the data transmission network supporting the commissioning of the DC-280 accelerator complex. The main characteristics of the communication channels are given. The settings of the network devices that provide secure access to network resources are discussed, including the configuration for the transmission of unicast and multicast packets over the IPv4 protocol. An authorization scheme and a storage system for the entire sequence of switch configurations are presented. The network monitoring system is considered and the link payload measured via SNMP is shown. A forecast of the future utilization of the non-blocking communication channels is given, and the necessity of the DHCP snooping functionality is described in detail. A system of wireless access to the local computer network has been developed. In conclusion, a summary is given together with a forecast of the future development of the LAN of the accelerator complex and of the backbone communication links. Keywords: Network, Monitoring, Authorization.
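        As an illustration of SNMP-based link-load monitoring of the kind mentioned above, here is a minimal sketch assuming a read-only SNMPv2c community, the standard IF-MIB counters and the Python pysnmp package; the host address, community string and interface index are illustrative, not those of the DC-280 network.

          import time
          from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                                    ContextData, ObjectType, ObjectIdentity, getCmd)

          def read_in_octets(host, community, if_index):
              # Fetch the 64-bit input-octet counter (IF-MIB::ifHCInOctets) of one interface.
              error_indication, error_status, _, var_binds = next(
                  getCmd(SnmpEngine(),
                         CommunityData(community, mpModel=1),   # SNMPv2c
                         UdpTransportTarget((host, 161)),
                         ContextData(),
                         ObjectType(ObjectIdentity('IF-MIB', 'ifHCInOctets', if_index))))
              if error_indication or error_status:
                  raise RuntimeError(str(error_indication or error_status))
              return int(var_binds[0][1])

          # Two samples taken a few seconds apart give the average inbound load of a link.
          first = read_in_octets('192.0.2.1', 'public', 1)
          time.sleep(10)
          second = read_in_octets('192.0.2.1', 'public', 1)
          print('inbound load: %.1f Mbit/s' % ((second - first) * 8 / 10 / 1e6))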
        Speaker: Mr Andrey Baginyan (ccnp)
        Slides
      • 11:45
        Geometry Database for the CBM Experiment 10m
        The latest improvements and updates of the Geometry Database (Geometry DB) for the CBM (Compressed Baryonic Matter) experiment are described. The Geometry DB is an information system that supports the CBM geometry. Its main purpose is to provide storage of the CBM geometry and to supply convenient tools for managing geometry modules and for assembling various versions of the CBM setup as a combination of geometry modules and additional files. The functionality to support several versions of the CBM setup has been added, together with the corresponding tools in both the graphical user interface (GUI) and the application programming interface (API). Users do not always need regular updates of the local CBMRoot installation from the version control system to solve their tasks (such as simulation or reconstruction). The main goal of the new functionality is therefore the automatic selection, while the geometry is being loaded, of the setup version that corresponds to the current user environment.
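        A purely illustrative sketch of how automatic setup-version selection through such an API might look; the class, method and file names below are hypothetical and do not correspond to the actual CBM Geometry DB interface.

          from dataclasses import dataclass, field

          @dataclass
          class SetupVersion:
              tag: str                                      # e.g. the CBMRoot version the setup targets
              modules: list = field(default_factory=list)   # geometry module files

          class GeometryDB:
              """Hypothetical client that keeps a catalogue of setup versions."""
              def __init__(self, catalogue):
                  self._catalogue = catalogue               # {tag: SetupVersion}

              def get_setup(self, environment_tag):
                  # Automatic selection: return the setup matching the user's
                  # environment, falling back to the newest available version.
                  return self._catalogue.get(environment_tag,
                                             self._catalogue[max(self._catalogue)])

          db = GeometryDB({
              "v19a": SetupVersion("v19a", ["sts_v19a.root", "tof_v19b.root"]),
              "v18d": SetupVersion("v18d", ["sts_v18d.root", "tof_v18a.root"]),
          })
          print(db.get_setup("v19a").modules)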
        Speaker: Irina Filozova (JINR)
        Slides
      • 11:55
        Stress computation in a sphere with surface defects 10m
        In this paper we consider the problem of calculating the stresses in a sphere with surface defects for a set of different initial conditions. Varying materials and defect sizes and shapes are considered. The computations are performed in a system that we developed from open-source components and that combines CAD and CAE functions in one user interface. We compare the operation of the system with some of its commercial counterparts with respect to the complexity of the work, expressed as the number of necessary user actions, and with respect to geometry-processing capabilities: the maximum number of polygons and the support for co-processors. A parametric approach to the creation and processing of geometry, combined with an ecosystem that provides FEM tools working on GPGPUs, minimizes the time needed to test an engineering idea under widely varying initial and boundary conditions. The paper shows how the computation time changes depending on the boundary conditions and the complexity of the geometry, and presents the main stress points in the load of the system performing the task. For the end user, the system provides an interface that combines interactive web components based on the Jupyter Notebook platform with a programming environment based on the Python language. The system is open source and can be deployed on any Linux-compatible system thanks to Docker containerization.
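        The parametric workflow described above could be driven roughly as in the following sketch; solve_sphere and its parameters are hypothetical stand-ins for the actual FEM solver of the system.

          import itertools

          def solve_sphere(material, defect_depth_mm, defect_shape):
              # Hypothetical stand-in for the FEM solver: it would mesh the sphere
              # with the given surface defect and return the maximum stress.
              return {"material": material, "depth_mm": defect_depth_mm,
                      "shape": defect_shape, "max_stress_MPa": None}

          materials = ["steel", "aluminium"]
          depths_mm = [0.5, 1.0, 2.0]
          shapes = ["spherical", "conical"]

          # Parametric sweep over all combinations of material, defect size and shape.
          results = [solve_sphere(m, d, s)
                     for m, d, s in itertools.product(materials, depths_mm, shapes)]
          print(len(results), "configurations evaluated")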
        Speaker: Ms Olga Sedova (Saint-Petersburg State University)
      • 12:05
        Zero-Knowledge Proof in Self-Sovereign Identity 15m
        This article provides an overview of currently existing technologies in the field of Self-Sovereign Identity. Special attention is paid to zero-knowledge proofs and how they can be used in distributed ledger technologies. The work shows how to keep a new user anonymous while still providing him with all the features of the system without decreasing the level of trust in him, exactly as if he were fully known to the system. Particular attention is paid to the ability of users to grant access to each other's resources without losing security. The algorithms by which this is done are presented.
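        As a reminder of the primitive itself, below is a minimal sketch of an interactive Schnorr-style zero-knowledge proof of knowledge of a discrete logarithm: the prover convinces the verifier that it knows the secret x behind the public value y without ever revealing x. This is a textbook illustration only, not the credential scheme discussed in the talk, and the parameters are chosen for readability rather than security.

          import secrets

          # Toy Schnorr identification: prove knowledge of x with y = g^x mod p.
          p = 2**127 - 1            # a Mersenne prime used as a toy modulus
          q = p - 1                 # exponents are reduced modulo the group order
          g = 3                     # illustrative generator

          x = secrets.randbelow(q)  # prover's secret key (never transmitted)
          y = pow(g, x, p)          # public key

          # Commitment: prover picks a random nonce r and sends t = g^r.
          r = secrets.randbelow(q)
          t = pow(g, r, p)

          # Challenge: verifier sends a random challenge c.
          c = secrets.randbelow(q)

          # Response: prover answers s = r + c*x (mod q).
          s = (r + c * x) % q

          # Verification: g^s must equal t * y^c (mod p).
          assert pow(g, s, p) == (t * pow(y, c, p)) % p
          print("proof accepted")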
        Speaker: Ms Nataliia Kulabukhova (Saint Petersburg State University)
        Slides
      • 12:20
        Development and Integration of the Electronic Logbook for the BM@N experiment at NICA 15m
        The acquisition of experimental data is an integral part of all modern high-energy physics experiments. During experiment sessions, not only the data collected from the detectors are important for understanding the produced events, but also the records in logbooks written by the shift crew that describe the operating modes of the various systems and detectors and different types of events. The report presents a new electronic logbook developed to automate the latter process in the BM@N experiment, a fixed-target experiment of the first stage of the NICA project at the Joint Institute for Nuclear Research. The online electronic logbook allows collaboration members to record, during experiment runs, information on current events, the states of various systems, the operating conditions of the detectors and much else that is further used in the processing and physics analysis of the particle collision events. The system provides users with tools for conveniently viewing, transparently managing and searching for the required information in the logbook. The specialized Web interface and the application programming interface for storing and accessing these data are considered. The important task of integrating the online electronic logbook with the central experiment database is also addressed. The implementation of such an information system is a necessary step for the successful future operation of the BM@N experiment.
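        A client of such a logbook API might store a record roughly as in the sketch below; the endpoint URL, record fields and token handling are hypothetical illustrations, not the actual BM@N interface.

          import requests

          # Hypothetical endpoint and record schema of an electronic-logbook API.
          LOGBOOK_URL = "https://bmn-logbook.example.org/api/records"

          record = {
              "run_number": 4321,                # illustrative run number
              "detector": "TOF",
              "type": "detector_state",
              "comment": "HV ramped up, detector ready for data taking",
          }

          response = requests.post(LOGBOOK_URL, json=record,
                                   headers={"Authorization": "Bearer <token>"},
                                   timeout=10)
          response.raise_for_status()
          print("stored record id:", response.json().get("id"))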
        Speaker: Dr Konstantin Gertsenberger (JINR)
        Slides
      • 12:35
        Data Knowledge Base: metadata integration system for HENP experiments 15m
        HENP experiments, especially long-lived ones like the ATLAS experiment at the LHC, have a diverse and evolving ecosystem of information systems that help scientists organize research processes, such as data handling (including data taking, simulation, processing, storage and access), the preparation and discussion of publications, etc. With time all the components of the ecosystem grow, develop into complex structures, accumulate metadata and become more independent and less flexible. Automated information integration thus becomes a pressing need for effective operation within the ecosystem. This contribution is dedicated to the meta-system known as the Data Knowledge Base (DKB), designed to integrate information from multiple independent sources and to provide fast and flexible access to the integrated knowledge. Over the last two years the system has been successfully integrated with the production system of the ATLAS experiment, including the extension of the production system web interface with functionality built upon the unified metadata provided by DKB.
        Speaker: Mrs Marina Golosova (National Research Center "Kurchatov Institute")
        Slides
      • 12:50
        Data streams processing in metadata integration system for HENP experiments 15m
        Nowadays, heterogeneous metadata integration has become a widespread objective. Whenever it is addressed, numerous tasks have to be solved, such as data-source analysis and storage-schema development. No less important is the development of automated, configurable and highly manageable ETL (data Extraction, Transformation and Load) processes, as well as the creation of tools for their automation, scheduling, management and monitoring. This work describes the Metadata Integration and Topology Management System, initially designed as a subsystem of the Data Knowledge Base (DKB) developed for the ATLAS experiment. The core idea of the subsystem is to separate the features common to the majority of ETL processes from the implementation of particular tasks. It is implemented as standalone modules, a supervisor and workers: the supervisor is responsible for building data streams through the workers, which implement a set of specific operations for a particular process. The system is intended to considerably facilitate the organization of ongoing data integration operations with automated data-stream processing.
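        The supervisor/worker separation described above can be illustrated with the following minimal sketch; the three toy stages and class names are hypothetical and only show the pattern, not the actual DKB implementation.

          import queue
          import threading

          # Minimal streaming-ETL sketch: each worker implements one specific
          # operation (extract, transform, load), while the supervisor wires the
          # workers together with queues and drives the data stream.

          def extract(_, out_q):
              for record in ({"task_id": i} for i in range(5)):   # toy metadata source
                  out_q.put(record)
              out_q.put(None)                                     # end-of-stream marker

          def transform(in_q, out_q):
              while (record := in_q.get()) is not None:
                  record["task_name"] = "task-%04d" % record["task_id"]
                  out_q.put(record)
              out_q.put(None)

          def load(in_q, _):
              while (record := in_q.get()) is not None:
                  print("stored:", record)                        # stand-in for a DB write

          class Supervisor:
              """Builds the data stream by chaining workers through queues."""
              def __init__(self, stages):
                  self.stages = stages

              def run(self):
                  queues = [queue.Queue() for _ in self.stages]
                  threads = [threading.Thread(target=stage, args=(q_in, q_out))
                             for stage, q_in, q_out
                             in zip(self.stages, [None] + queues[:-1], queues)]
                  for t in threads:
                      t.start()
                  for t in threads:
                      t.join()

          Supervisor([extract, transform, load]).run()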
        Speaker: Anastasiia Kaida (National Research Tomsk Polytechnic University, School of Computer Science & Robotics)
        Slides
    • 11:20 13:05
      Innovative IT Education Splendid Conference & SPA Resort, Conference Hall Petroviċa

      Convener: Nadezhda Tokareva (Dubna University)
      • 11:20
        The features of training specialists in electronics design for «Megascience» 15m Conference Hall Petroviċa (Montenegro, Budva, Becici)

        Speaker: Dr Iurii Sakharov (Dubna International University for Nature, Society and Man)
    • 13:05 13:30
      Closing Splendid Conference & SPA Resort, Conference Hall Baltšiċa

      Convener: Dr Tadeusz Kurtyka (CERN)