SCIENCE BRINGS NATIONS TOGETHER
Montenegro, Budva, Becici, 25 September - 29 September 2017

Timezone: Europe/Podgorica
Venue: Conference Hall, Splendid Conference & SPA Resort, 85315 Becici, Budva, Montenegro
Description
Welcome to NEC’2017!

From 25 to 29 September 2017, Budva (Montenegro) will host the 26th JINR Symposium on Nuclear Electronics and Computing, NEC'2017. The symposia have been held regularly since 1963.

For the ninth time the Symposium is organized jointly by JINR and CERN. The Symposium attendees are leading specialists in advanced computing and network technologies, distributed computing, grid and cloud computing, and nuclear electronics.

All previous forums in this series were highly valued by the leading specialists and companies involved.

The organizers of the NEC symposia traditionally pay particular attention to young scientists and specialists. At previous NEC conferences they made up an impressive share of the audience, reaching 35% of all participants.

In 2011, 2013 and 2015, student schools on advanced information technologies were organized within the scope of the symposium, each attended by almost 80 students from different countries. The tradition continues in 2017.

Chairpersons
Vladimir Korenkov, JINR
Ian Bird, CERN


Sponsors: IBS, Niagara, Jet, Dell EMC

Book of Abstracts
First Announcement
Preliminary Program
Program
Participants
  • Aleksey Savelev
  • Alexander Avrorin
  • Alexander Bychkov
  • Alexander Olshevskiy
  • Alexandr Dmitriev
  • Alexandre Karlov
  • Alexei Klimentov
  • Alexey Anisenkov
  • Alexey Bugrov
  • Alexey Perevozchikov
  • Alexey Voinov
  • Anastasiya Koltyukova
  • Anatoly Zarubin
  • Andrei Tsaregorodtsev
  • Andrey Dolbilov
  • Andrey Kiryanov
  • Andrey Nechaevskiy
  • Andrey Sheshukov
  • Andrey Shevel
  • Artem Petrosyan
  • Callum Kilby
  • Danila Oleynik
  • Daria Stankus
  • Dario Barberis
  • Dirk Duellmann
  • Dmitrii Monakhov
  • Dmitriy Ponkin
  • Dmitry Egorov
  • Dmitry Garanov
  • Dmitry Kamanin
  • Dmitry Podgainy
  • Elena Kirpicheva
  • Elena Russakovich
  • Elena Tuzhilkina
  • Emma Torro
  • Evgenia Cheremisina
  • Evgenii Kuzin
  • Evgeny Molchanov
  • Evgeny Tushov
  • Fedor Pavlov
  • Fernando Barreiro Megino
  • Gennady Ososkov
  • Georgy Sedykh
  • Giuseppe Francesco Tartarelli
  • Haibo Yang
  • Hong Su
  • Hongyun Zhao
  • Hristo Nazlev
  • Igor Golutvin
  • Igor Pelevanyuk
  • Igor Semenushkin
  • Ilnur Gabdrakhmanov
  • Ilya Shirikov
  • Irina Filozova
  • Iurii Sakharov
  • Ivan Kadochnikov
  • Ivan Vankov
  • Jack Wells
  • Jingyan Shi
  • Jingzhe Zhang
  • Julia Andreeva
  • Kaushik De
  • Kenneth Herner
  • Konstantin Gertsenberger
  • Leo Schlattauer
  • Levente Hajdu
  • Maarten Litmaath
  • Marcus Morgenstern
  • Maria Grigoryeva
  • Marina Golosova
  • Markus Schulz
  • Martin Vala
  • Maxim Karetnikov
  • Mikel Eukeni Pozo Astigarraga
  • Mikhail Borodin
  • Mikhail Titov
  • Milos Lokajicek
  • Mohammad Al-Turany
  • Nadezhda Tokareva
  • Nataliya Boklagova
  • Nicolai Iliuha
  • Nikita Balashov
  • Nikita Belyaev
  • Nikolay Gorbunov
  • Nikolay Luchinin
  • Oksana Kreider
  • Oksana Streltsova
  • Oleg Rogachevskiy
  • Oleg Samoylov
  • Olga Kovaleva
  • Olga Rumyantseva
  • Patrick Fuhrmann
  • Pavel Dohnal
  • Pavel Goncharov
  • Petr Jancik
  • Petr Vokac
  • Petr Zrelov
  • Qianshun She
  • Robert Wolff
  • Rozaliia Matveeva
  • Ruslan Smelyanskiy
  • Sarah Demers
  • Savanna Shaw
  • Sergei Baidali
  • Sergei Gerassimov
  • Sergey Belov
  • Sergey Sidorchuk
  • Stanislav Pakulyak
  • Stepan Vereschagin
  • Svetlana Murashkevich
  • Tatiana Korchuganova
  • Tatiana Strizh
  • Tatiana Zaikina
  • Vadim Babkin
  • Vadim Bednyakov
  • Vadim Kochetov
  • Valeriy Egorshev
  • Valery Mitsyn
  • Vasilii Shvetcov
  • Vasily Andreev
  • Vasily Velikhov
  • Viacheslav Samarin
  • Victor Matveev
  • Victor Pilyugin
  • Victor Rogov
  • Victoria Belaga
  • Victoria Tokareva
  • Vitalii Aleinikov
  • Vitaly Antonenko
  • Vitaly Shutov
  • Vladimir Dimitrov
  • Vladimir Dobrynin
  • Vladimir Drozdov
  • Vladimir Elkin
  • Vladimir Karjavine
  • Vladimir Korenkov
  • Vladimir Yurevich
  • Weidong Li
  • Yannick Legré
  • Yaroslav Tarasov
  • Yelena Mazhitova
  • Yuri Minaev
  • Yury Panebrattsev
  • Yury Tsyganov
    • Registration (Splendid Conference & SPA Resort)
    • Welcome speeches (Conference Hall)
      Convener: Dr Vladimir Korenkov (JINR)
      • 1
        Welcome from Montenegro officials
      • 2
        Welcome from Organizing Committee
      • 3
        Welcome from Sponsors
    • Plenary (Conference Hall)
      Convener: Dr Vladimir Korenkov (JINR)
      • 4
        Scientific Program of JINR
        Speaker: Dr Vadim Bednyakov (JINR)
        Slides
      • 5
        Opening welcome from CERN
        Speaker: Dr Tadeusz Kurtyka (CERN)
        Slides
      • 6
        Status and Future of WLCG
        The LHC science program has used WLCG, a globally federated computing infrastructure, for the last 10 years, enabling its ~10k scientists to publish more than 1000 physics papers in peer-reviewed journals. This infrastructure has grown to provide ~750k cores, 400 PB of disk space, 600 PB of archival storage, as well as high-capacity networks connecting all of these. Taking 2016 as a reference, the community processed roughly 10 trillion collision events, often requiring multiple runs across parts of the primary data. Naïve projections from current practice to the HL-LHC data volumes, taking into account Moore's-law cost reductions of 10-20% per year, predict that computing hardware needs will exceed a flat hardware budget scenario by a factor of 10-25 (an illustrative back-of-the-envelope calculation follows this entry). To achieve an efficiency gain at such a scale the community is rethinking the overall LHC computing models. These also have to enable the efficient use of new technologies and take into account the changes in the way computing resources can be provisioned. The presentation will cover the evolution of WLCG and the current status of the discussion of future computing models.
        Speaker: Dr Markus Schulz (on behalf of WLCG)
        Slides
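        The quoted shortfall can be reproduced with a back-of-the-envelope calculation. The sketch below is not from the talk; the needs-growth factors and the ten-year horizon are placeholder assumptions, chosen only to show how a flat-budget capacity multiplier and the resulting gap are computed.

```python
# Toy illustration of the flat-budget projection quoted in the abstract.
# Assumptions (hypothetical, not the talk's inputs): price/performance improves
# 10-20% per year, the horizon is ~10 years, and naive HL-LHC needs are some
# multiple of today's capacity.
horizon_years = 10

for annual_cost_reduction in (0.10, 0.20):
    # Capacity a *flat* budget buys after the horizon, relative to today.
    flat_budget_capacity = (1.0 / (1.0 - annual_cost_reduction)) ** horizon_years
    for needs_multiple in (30, 100):          # placeholder growth of resource needs
        shortfall = needs_multiple / flat_budget_capacity
        print(f"cost -{annual_cost_reduction:.0%}/yr, needs x{needs_multiple}: "
              f"flat budget buys x{flat_budget_capacity:.1f}, shortfall ~x{shortfall:.0f}")
```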
    • 11:30
      Coffee break
    • Plenary (Conference Hall)
      Convener: Dr Tadeusz Kurtyka (CERN)
      • 7
        JINR computing infrastructure
        Speaker: Dr Vladimir Korenkov (JINR)
        Slides
      • 8
        The FabrIc for Frontier Experiments Project at Fermilab: Computing for Experiments
        The FabrIc for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division designed to steer the computing model for non-LHC experiments at Fermilab. The FIFE project enables close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing scope and physics area of study. The project also tracks and provides feedback on the development of common tools for job submission, identity management, software and data distribution, job monitoring, and databases for project tracking. The computing needs of the experiments under the FIFE umbrella continue to increase, and present a complex list of requirements to their service providers. To meet these requirements, recent advances in the FIFE toolset include a new identity management infrastructure, significantly upgraded job monitoring tools, and a workflow management system. We have also upgraded existing tools to access remote computing resources such as GPU clusters and sites outside the United States. We will present these recent advances, highlight the nature of collaboration between the diverse set of experimenters and service providers, and discuss the project's future directions.
        Speaker: Dr Kenneth Herner (Fermi National Accelerator Laboratory)
        Slides
    • 13:00
      LUNCH
    • Plenary (Conference Hall)
      Convener: Dr Markus Schulz
      • 9
        GRID and Cloud Computing at IHEP in China
        The distributed computing system at the Institute of High Energy Physics (IHEP), Chinese Academy of Sciences, was first built on DIRAC in 2013 and put into production in 2014. This presentation will introduce the development and latest status of this system: the DIRAC-based WMS was extended to support multi-VO scheduling based on VOMS; a general-purpose task submission and management tool was developed to ease bulk submission and management of experiment-specific jobs, with a modular design and customizable workflows; to support multi-core jobs, different multi-core job scheduling methods have been tested and their performance compared; and to monitor and manage the heterogeneous resources in a uniform way, a resource monitoring and automatic management system has been implemented based on the Resource Status Service of DIRAC. Cloud computing provides a new way for high energy physics applications to access a shared pool of configurable computing resources. Based on the requirements of our domestic experiments, IHEP launched a cloud computing project, IHEPCloud, in 2014. This presentation will also introduce the status of IHEPCloud and some ongoing R&D work, including a resource scheduler based on an affinity model, integration of SDN with OpenStack to achieve configuration flexibility, and performance evaluation. (A minimal generic DIRAC job-submission sketch follows this entry.)
        Speaker: Dr Weidong Li (IHEP, Beijing)
        Slides
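        For orientation, the sketch below shows a minimal, generic job submission through the standard DIRAC Python API; it assumes an installed and configured DIRAC client with a valid grid proxy, and it does not show the IHEP-specific multi-VO or multi-core extensions described above.

```python
# Minimal DIRAC job submission sketch (generic DIRAC API, not the IHEP extensions).
# Requires an installed/configured DIRAC client and a valid grid proxy.
from DIRAC.Core.Base import Script
Script.parseCommandLine()  # initialize the DIRAC configuration before importing the APIs

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("nec2017-demo")
job.setExecutable("/bin/echo", arguments="hello from DIRAC")
job.setCPUTime(300)  # requested CPU time in seconds

result = Dirac().submitJob(job)
print(result)  # S_OK/S_ERROR structure; the job ID is returned on success
```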
      • 10
        Approaching exascale for nuclear physics applications: diverse science requirements and energy constraints drive new paradigms in HPC
        Over its many-decade history, nuclear and particle physics research has been a driver for advances in high-performance computing (HPC) and has come to view HPC as an essential scientific capability. Indeed, the dawn of the twenty-first century has witnessed the widespread adoption of HPC as an essential tool in the modeling and simulation of complex scientific phenomena. And today, in 2017, many research institutions consider excellence in modeling and simulation via HPC, and associated capabilities in data analysis, to be essential in overcoming forefront problems in science and society. To this end, the United States (U.S.) Department of Energy's (DOE) planned deployment of the Summit supercomputer at the Oak Ridge Leadership Computing Facility (OLCF) in the 2018 timeframe will increase the computing capability available within the U.S. by an order of magnitude, for a performance of up to 200 petaflops. The system will provide a 5-10x increase in scientific application capability or performance compared to today's Titan supercomputer at OLCF. These technological advances and the increase in computing capability enable scientists to pursue ever more challenging research questions that in turn drive the need for even more powerful systems. Looking forward toward the age of exascale computing, the DOE is engaged in an ambitious enterprise, integrating the Exascale Computing Project (ECP) (exascaleproject.org) and the computing facilities at major DOE laboratories, such as Oak Ridge National Laboratory (ORNL), to procure and deploy exascale supercomputers in the 2021 to 2023 time frame delivering 50x to 100x today's capabilities. Supporting this effort is a wide range of research-community-engagement activities, including exascale ecosystem requirements workshops sponsored by the DOE Office of Science over the past two years; a total of six topical and one cross-cutting workshop reports are being finalized and published (exascaleage.org). These combined efforts must address the key technical challenges on the way to exascale computing capabilities: massive parallelism, memory and storage efficiencies, reliability, and energy consumption. Solutions to these challenges are needed in a form consistent with high-productivity programming and user environments. Focusing on experiences within DOE's Leadership Computing Facility Program at ORNL, this presentation will highlight the goals, current status, and next steps for DOE's Summit project. I will highlight requirements from DOE's Office of Science nuclear and particle physics users for integrated compute- and data-intensive capabilities that are likely to drive new operational paradigms within the HPC centers of the future, such as data-intensive machine-learning applications, and the integration of high-throughput computing and high-performance computing workloads.
        Speaker: Dr Jack Wells (Oak Ridge National Laboratory)
        Paper
      • 11
        Experimental projects dedicated to the research of exotic nuclei in Dubna
        The development of the experimental base of the Flerov Laboratory (JINR, Dubna) planned for the forthcoming 7-year period includes two principal directions. The first one implies the study of physical and chemical properties of nuclei in the vicinity of the so-called "Island of Stability". This activity will be developed mainly on the basis of the Super Heavy Element (SHE) Factory. The factory, comprising the high-current cyclotron DC-280 and a number of new facilities, is expected to be launched by the end of 2017. The high intensity of accelerated beams and the drastically improved parameters of the new separators will increase the total efficiency of experiments by a factor of at least 1000. Another promising field of research is connected with the use of secondary beams of radioactive nuclei. The new fragment separator ACCULINNA-2, intended for studies in the region of light masses close to the nucleon drip lines, was recently put into operation in the Flerov Laboratory. The scientific plan for the forthcoming several years implies modernization of the operating accelerators aimed, in particular, at a substantial increase of the energy of the accelerated nuclei used for the production of radioactive beams. These technical changes, combined with tried and tested experimental approaches, will provide a luminosity of secondary beams on a physical target at the level expected for the most advanced radioactive beam factories.
        Speaker: Dr Sergey Sidorchuk (FLNR JINR)
        Slides
    • 16:00
      Coffee break
    • Plenary (Conference Hall)
      Convener: Dr Alexei Klimentov (Brookhaven National Lab)
      • 12
        The ATLAS Production System Evolution
        The second generation of the ATLAS Production System called ProdSys2 is a distributed workload manager that runs daily hundreds of thousands of jobs, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based upon many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies and by supporting different kinds of computational resources, such as GRID, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization is one of the major features of the system. The Production System has a sophisticated job fault recovery mechanism, which efficiently allows running multi-terabyte tasks without human intervention. We have implemented new features which allow automatic task submission and chaining of different types of production. We present recent improvements of the ATLAS Production System and its major components: task definition and web user interface. We also report the performance of the designed system and how various workflows, such as data (re)processing, Monte Carlo and physics group production, and user analysis, are scheduled and executed within one production system on heterogeneous computing resources.
        Speaker: Mr Mikhail Borodin (The University of Iowa (US))
        Slides
      • 13
        ATLAS Trigger and Data Acquisition Upgrade Plans for High Luminosity LHC
        By 2026 the High Luminosity LHC will be able to deliver to its experiments at CERN 14 TeV proton-proton collisions with an order of magnitude higher instantaneous luminosity than the original design, at the expected value of 7.5 × 10^34 cm−2s−1. The ATLAS experiment is planning a series of upgrades to prepare for this new and challenging environment, which will produce much higher data rates and larger and more complex events than the current experiment was designed to handle. A broad physics program prepared for this fourth LHC run is driving the full upgrade plan, which will involve major changes in the detectors as well as in the trigger and data acquisition system. The detector upgrades themselves present new requirements but also new opportunities for radical changes in the trigger and data acquisition architecture. This presentation will describe the baseline architectures established for different luminosity scenarios, while also detailing ongoing studies into new system components and their interconnections. The overall challenge here is to meet low latency and high data throughput requirements within the limits given by technological evolution. One key aspect driving the design is the need for rate reduction, which will be based on easily identifiable high momentum electrons and muons. However, hadronic final states are also becoming important for investigations of the full phase space of the Standard Model and beyond. This is motivating the inclusion of both higher resolution first-level trigger information and a new hardware tracking system. The high throughput data acquisition system and the commodity hardware and software-based data handling and event filtering are also key ingredients to ensure maximum efficiency in recording data and processing. A discussion on the physics motivations and the expected performance based on simulation studies will be presented, together with the open issues and plans.
        Speaker: Sarah Demers (CERN)
        Slides
      • 14
        Niagara/Supermicro Innovation Technologies
        Speaker: Niagara Computers
        Slides
      • 15
        Software-Defined Networks
        Speaker: Fedor Pavlov
        Slides
    • 20:00
      Welcome Party (Drinks & Buffet)
    • Plenary: EGI and WLCG Evolution (Conference Hall)
      Convener: Ms Julia Andreeva (CERN)
      • 16
        EGI: advanced computing for research
        Speaker: Mr Yannick Legré (EGI Foundation)
        Slides
      • 17
        Lightweight Sites in the Worldwide LHC Computing Grid
        One of the goals of the WLCG Operations Coordination activities is to help simplify what the majority of WLCG sites, i.e. the smaller ones, need to accomplish to be able to contribute resources in a useful manner, i.e. with large benefits compared to the effort invested. This contribution describes different areas of activity which aim to allow sites to be run with minimal oversight and operational effort, from people at the sites themselves as well as beyond. These areas include several scenarios for deployment and management of site resources and multiple paradigms for providing resources in general, both for compute and for data management, as well as R&D activities involving data mining and machine learning algorithms for a better understanding of monitoring and logging information, up to trend analysis that would allow further automation of operational tasks.
        Speaker: Maarten Litmaath (CERN)
        Slides
      • 18
        Improving site efficiency by integrating storage nodes and batch processing
        The Tier-0 at CERN operates large storage and computing farms for the LHC community. For economic reasons the hardware of the disk servers is, with respect to CPU and memory, virtually identical to that used in the batch nodes. Monitoring data showed that these nodes are not running anywhere close to their computational limit. Proof-of-concept tests conducted by Andrey Kiryanov showed that more than 80% of the node capacity can be used for computational tasks while creating no detrimental effect on the peak I/O rates. These results were shown at HEPiX 2017. Our team at CERN is expanding the concept in the BEER (Batch on EOS Extra Resources) project so that it is ready to be integrated into the production service. The approach to partitioning the resources, the strategy for configuration management and results with production workloads will be shown.
        Speaker: Dr Markus Schulz (CERN)
        Slides
      • 19
        WLCG Data Management evolution
        Speaker: Dirk Duellmann (CERN)
        Slides
      • 20
        Combined analysis of storage and CPU resources at CERN
        CERN provides a significant part of the storage and CPU resources used for LHC analysis and is, like many other WLCG sites, preparing for a significant increase in requirements for LHC Run 3. In this context, an analysis working group has been formed at CERN IT with the goal of enhancing science throughput by increasing the efficiency of the storage and CPU services via a systematic statistical analysis of operational metrics. Starting from a more quantitative understanding of the use of the available IT resources, we aim to support a joint optimisation with the LHC experiments and the joint planning of upcoming investments. In this talk we will describe the Hadoop-based infrastructure used for preprocessing medium- and long-term (1-48 months) metric collections and some of the tools used for aggregate performance analysis and prediction, and we will conclude with some results obtained with this new infrastructure. (An illustrative aggregation sketch follows this entry.)
        Speaker: Dirk Duellmann (CERN)
        Slides
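        To make the kind of preprocessing described above concrete, the sketch below aggregates hypothetical per-host monitoring records with PySpark. The field names (host, ts, cpu_util, bytes_read) and the HDFS paths are illustrative assumptions, not the actual CERN IT schema or layout.

```python
# Hypothetical aggregation of monitoring metrics with PySpark.
# Field names and paths are illustrative; the real CERN schema differs.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("metrics-aggregation").getOrCreate()

# One JSON record per host per minute, e.g.
# {"host": "...", "ts": "2017-09-25 10:00:00", "cpu_util": 0.42, "bytes_read": 123}
metrics = spark.read.json("hdfs:///monitoring/raw/2017/*.json")

daily = (metrics
         .withColumn("day", F.to_date("ts"))
         .groupBy("host", "day")
         .agg(F.avg("cpu_util").alias("avg_cpu_util"),
              F.sum("bytes_read").alias("bytes_read_total")))

daily.write.mode("overwrite").parquet("hdfs:///monitoring/aggregated/daily")
```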
    • 11:10
      Coffee break
    • Plenary: EGI and WLCG Evolution (Conference Hall)
      Convener: Milos Lokajicek (Institute of Physics AS CR)
      • 21
        Federated data storage system prototype for LHC experiments and data intensive science
        The rapid increase of data volumes from the experiments running at the Large Hadron Collider (LHC) has prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim to unite their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our talk we will cover the deployment and testing of a federated data storage prototype for WLCG centers of different levels and university clusters within one Russian National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for scientific applications with access from grid centers, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests, including real data processing and analysis workflows from the ATLAS and ALICE experiments. We will present the topology and architecture of the designed system and show how it can be achieved using different software solutions such as EOS and dCache. We will also describe how sharing data on a widely distributed storage system can lead to a new computing model and a reshaping of the classic computing style.
        Speaker: Mr Andrey Kiryanov (PNPI)
        Slides
      • 22
        Evolution of tools for WLCG operations
        The WLCG infrastructure combines the computing resources of more than 170 centers in 42 countries all over the world. Smooth operation of such a huge and heterogeneous infrastructure is a complicated task performed by a distributed team. The constant growth of computing resources and the technology evolution that introduces new types of resources, such as HPC and commercial clouds, together with a simultaneous decrease of the effort that can be dedicated to operational tasks, represent a challenge for WLCG operations. The contribution will describe the current development of the systems used for WLCG operations, which include monitoring, accounting and the information system.
        Speaker: Ms Julia Andreeva (CERN)
        Slides
      • 23
        Computing Resource Information Catalog: the ATLAS Grid Information system evolution for other communities
        The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centers affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centers all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the LHC experiments, a central information system is required. It should provide a description of the topology of the WLCG infrastructure as well as configuration data needed by the various WLCG software components and by experiment-oriented services and applications. This contribution describes the evolution of the ATLAS Grid Information System (AGIS) into the common Computing Resource Information Catalog (CRIC), a framework designed to describe the topology of the LHC experiments' computing models, providing a unified description of the resources and services used by experiment applications. CRIC collects information from various information providers (such as GOCDB, OIM, the central BDII and experiment-specific information systems), performs validation and provides a consistent set of web UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be quickly adaptable to new types of computing resources and new information sources, and should allow new data structures to be implemented easily, following the evolution of the computing models and operations of the experiments. Experiments rely more and more on newer types of resources, such as opportunistic cloud or HPC resources, which by their nature are more dynamic and not integrated in any existing WLCG framework; these resources also need to be described in CRIC to allow the experiments to exploit them effectively. The implementation of CRIC was inspired by the successful experience with the ATLAS Grid Information System. In this note we describe recent developments of CRIC functionality, the definition of the main information model and the overall architecture of the system, which in particular provides a clean functional decoupling between the physical distributed computing capabilities and the resources used by a particular experiment, split into two parts: (1) a core part which describes all physical service endpoints and provides a single entry point for experiment service discovery; (2) optional experiment-specific extensions, implemented as plugins, which describe how the physical resources are used by the experiments and contain additional attributes and configuration required by the experiments for operations and for the organization of their data and workflows. CRIC not only provides a current view of the WLCG infrastructure, but also keeps track of performed changes and audit information. Its administration interface allows authorized users to make changes. Authentication and authorization are subject to experiment policies in terms of data access and update privileges.
        Speaker: Mr Alexey Anisenkov (BINP)
        Slides
      • 24
        Data Storage Evolution
        Speaker: Fedor Pavlov
        Slides
    • 13:10
      LUNCH
    • Detector & Nuclear Electronics (Conference Hall)
      Convener: Dr Sergey Sidorchuk (FLNR JINR)
      • 25
        An upgraded TOF-ΔE1-ΔE2-E (DSSSD) based spectrometer for heavy-element research at the Dubna Gas-Filled Recoil Separator
        Two scenarios for modifying the DGFRS (Dubna Gas-Filled Recoil Separator) spectrometer of rare alpha decays are under consideration. Both of them imply the use of the integral 1M CAMAC analog-to-digital processor TekhInvest ADP-16 [1,2] as a basic unit of the spectrometer design. In scenario (a) a special unit (PKK-05) [3] will be used to measure the horizontal position of the signal without measuring its energy, whereas in scenario (b) a complete set (12 ADP-16 modules for the 48x128 strips of the DSSSD) is used to measure both energy and position signals. To measure the signals of charged particles coming from the cyclotron, an upgraded gaseous low-pressure TOF-ΔE1-ΔE2 module is used. To store the TOF-ΔE1-ΔE2 information, a dedicated 1M module, TekhInvest PA-3n-tof, is used. First results of trial runs using the dedicated TekhInvest IMI-2011 pulser and the test nuclear reaction natYb+48Ca→Th* are presented. A new algorithm to search for ER-α-α…α(SF) sequences in real-time mode is discussed, taking into account the commissioning in the near future of the new FLNR DC-280 cyclotron that is to provide beams of very high intensity [4]. An equivalent circuit for two neighboring strips of the p-n junction side is proposed. It predicts a small non-linear ballistic effect for signals originating in the inter-strip p-n junction area. Additionally, the authors define abstract mathematical objects, such as the correlation graph and incoming event matrices of a different nature, to construct in a simple form a rare-event detection procedure more exhaustive than the present one, using real-time detection mode. In that case each of the n·(n-1)/2 correlation graph edges can be used as a "trigger" for beam irradiation pauses to provide a "background free" condition in the search for ultra-rare alpha decays; here n is the number of correlation graph nodes. Schematics of these algorithms are considered. (A rough illustrative sketch of such a real-time correlation search follows this entry.) References: [1] Yu.S. Tsyganov // Lett. to ECHAYA, 2016, Vol. 13(203), pp. 898-904. [2] A.N. Kuznetsov // ADP-16 TekhInvest manual. [3] V.G. Subbotin, A.M. Zubareva, A.A. Voinov, A.N. Zubarev, L. Schlattauer // Lett. to ECHAYA, 2016, Vol. 13(203), pp. 885-889. [4] G.G. Gulbekyan et al. // Project of the DC-280 cyclotron, report at the JINR Nucl. Phys. PAC.
        Speaker: Mr Yury Tsyganov (JINR)
        Slides
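        The real-time ER-alpha correlation search mentioned above can be sketched roughly as follows. The energy windows, the correlation time window, the pixel addressing and the beam-pause hook are purely illustrative placeholders, not the actual DGFRS parameters or algorithm.

```python
# Rough sketch of a real-time recoil-alpha position/time correlation search.
# All thresholds, windows and the beam-pause hook are illustrative placeholders.
import time

RECOIL_WINDOW = (5.0, 20.0)   # MeV, assumed evaporation-residue energy window
ALPHA_WINDOW = (8.0, 12.0)    # MeV, assumed alpha energy window
MAX_DELTA_T = 10.0            # s, assumed recoil-alpha correlation time window

last_recoil = {}              # (front_strip, back_strip) -> implantation time

def pause_beam():
    # Placeholder for the "background free" beam-off request described in the abstract.
    print("ER-alpha correlation found: requesting beam-off interval")

def process_event(front_strip, back_strip, energy_mev, t=None):
    """Return True (and request a beam pause) when an alpha follows a recoil
    in the same pixel within the correlation time window."""
    t = time.time() if t is None else t
    pixel = (front_strip, back_strip)
    if RECOIL_WINDOW[0] <= energy_mev <= RECOIL_WINDOW[1]:
        last_recoil[pixel] = t                 # candidate implantation
        return False
    if ALPHA_WINDOW[0] <= energy_mev <= ALPHA_WINDOW[1]:
        t0 = last_recoil.get(pixel)
        if t0 is not None and (t - t0) <= MAX_DELTA_T:
            pause_beam()
            return True
    return False

# Usage: process_event(12, 77, 11.3)  # a candidate alpha in pixel (12, 77)
```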
      • 26
        Development of the autocalibration system for the DGFRS spectrometer based on the double-sided silicon strip detectors
        The detection system of the Dubna Gas-Filled Recoil Separator (DGFRS), aimed at studying SHE nuclei and their decay properties, has been modernized during the last few years. A new set of multistrip double-sided silicon detectors (DSSD) in the focal plane of the DGFRS is now used instead of the old array of 12-strip position-sensitive Si detectors. The total number of spectroscopic channels of the registering system has also increased, up to 224 channels. This leads to more precise measurement of the energy and position of the SHE nuclei implanted into the focal detectors and of their decay products. It is important to test the registering system and perform an energy calibration before carrying out such unique experiments on the synthesis of new nuclei from the "Island of Stability". This work describes the designed method and the dedicated digital module that allow an energy calibration to be performed for all 224 individual spectroscopic channels independently. The device automatically steps through all individual channels one after another, thus imitating charged particles arriving at the detectors. The energy of the imitation signal can be chosen in the range from 1 MeV up to 250 MeV with good amplitude linearity and stability.
        Speaker: Mr Alexey Voinov (FLNR JINR)
        Slides
      • 27
        New particle position determination modules for Double Side Silicon Strip Detector at DGFRS
        New particle position determination modules for the double-sided silicon strip detector were designed to simplify the existing multi-channel measurement system used in the search for rare events of superheavy element formation at the DGFRS. The main principle is to search, in real time over all 128 back strips, for position-correlated sequences of implanted SHE and subsequent alpha particles or SF events above a predefined energy threshold. The resulting information provides the address of the active strip and the coincidence flag. The newly developed system trigger has passed the prototyping stage and is about to be used in the next experiment. This system will reduce the overall system dead time. This talk describes in depth the CD32-5M coder units and the PKK-05 preregister unit briefly introduced in this abstract.
        Speaker: Mr Leo Schlattauer (Palacky University Olomouc, Czech Republic, JINR Dubna)
        Slides
      • 28
        NEUTRON GENERATORS AND DAQ SYSTEMS FOR TAGGED NEUTRON TECHNOLOGY
        In the T(d,n)4He reaction, each 14 MeV neutron is accompanied by a 3.5 MeV alpha particle emitted in the opposite direction. A position- and time-sensitive alpha detector measures the time and coordinates of the associated alpha particle, which allows the time and direction (tags) of the neutron escape to be determined. The tagged neutron technology is based on a time and spatial selection of events that occur when a tagged neutron passes through the object. The ING-27 neutron generators produced by VNIIA provide a high intensity of tagged neutrons in a wide cone angle; the high spatial and time resolution of the tagged neutrons is provided by the pixelated alpha detector. The requirements for DAQ systems for various tagged neutron devices are reported. The architecture and parameters of a DAQ system based on preliminary online selection of signals by analog front-end electronics and transmission of only useful events for subsequent computer processing are considered. Examples of tagged neutron devices for various applications are presented.
        Speaker: Dr Maxim Karetnikov (VNIIA)
      • 29
        Data acquisition systems on neutron spectrometers of the IBR-2 reactor
        The report describes the electronics and software of the data acquisition systems for thermal neutron detectors [1] that are currently used on the spectrometers of the IBR-2 reactor at JINR. The experience gained during the operation of these systems is summarized, and the results of the performance analysis of the data acquisition systems developed in FLNP for position-sensitive neutron detectors based on multi-wire proportional chambers with delay-line readout are given. Requirements for the throughput of the electronics and the software functionality of such systems are refined. [1] Kulikov S.A., Prikhodko V.I., 2016, Physics of Particles and Nuclei 47(4) 702-10
        Speaker: Mr Vladimir Drozdov (FLNP JINR)
        Slides
    • Triggering, Data Acquisition, Control Systems (Conference Hall)
      Convener: Nikolai Gorbunov (JINR)
      • 30
        Increasing Bandwidth of Data Acquisition Systems on IBR-2 Reactor Spectrometers in FLNP
        Present trends towards increasing the number of detector channels and the volumes of data registered and accumulated in real time in experiments on the IBR-2 reactor spectrometers in FLNP require increasing the bandwidth of the data acquisition systems. The paper considers the modernization of the data acquisition system based on the MPD and De-Li-DAQ-2D blocks developed earlier in FLNP and widely used on neutron spectrometers today. Initially, to connect the modules to the computer, the FLINK fiber-optic adapter with a USB 2.0 interface was developed for this system. In new projects aimed at the development of the FLNP spectrometers, up to 240 detector elements are to be connected to MPD units with a maximum load of up to 8M events/s. This requires increasing the bandwidth of the channels connecting to the computer to 50 MB/s, which is not feasible with the existing USB 2.0 interface. To achieve this goal, several variants of upgrading the link interfaces for the De-Li-DAQ-2D and MPD modules used in the data acquisition system for the IBR-2 spectrometers have been developed. A tenfold bandwidth increase has been realized by developing a new FLINK-USB3.0 adapter which links the fiber-optic interface of the modules to the USB 3.0 interface of the computer.
        Speaker: Mr Vasilii Shvetcov (FLNP)
        Slides
      • 31
        Data management and processing system for Baikal-GVD
        Baikal-GVD is a gigaton-volume underwater neutrino detector located in Lake Baikal. Compared to NT-200+, the previous iteration of the Baikal neutrino observatory, Baikal-GVD represents a leap in complexity and raw data output. Therefore, a new, comprehensive data management infrastructure for the transfer and analysis of the experimental data has been established. It includes two-tier data storage, data quality control and online processing facilities. Experimental data is analyzed with BARS (Baikal Analysis and Reconstruction Software), a framework designed specifically for Baikal-GVD that provides both a low-level interface for data analysis and a set of high-level utilities for common tasks.
        Speaker: Mr Alexander Avrorin (INR RAS)
        Slides
      • 32
        Software Implementation of USB 3.0 Stack for Upgraded Data Link Interface on IBR-2 Reactor Spectrometers in FLNP
        This work considers the software implementation of the USB 3.0 protocol stack for operating the data acquisition units of the IBR-2 spectrometric system with an upgraded communication adapter. The data acquisition system based on the De-Li-DAQ-2D and MPD blocks developed earlier in FLNP is at present widely used on neutron spectrometers. To connect the modules to the computer, an FLINK fiber-optic adapter with a USB 2.0 interface was originally developed for this system. Modern trends towards increasing the number of detector channels and the volumes of information recorded and accumulated in real time in experiments on the IBR-2 spectrometers in FLNP require increasing the bandwidth and reliability of the communication channel. In addition to replacing the driver and using the FTD3XX library of the FT600 chip to provide the USB SuperSpeed to FIFO bridge with the new communication adapter, improvement of the software for an advanced application-level communication protocol with the DAQ blocks is also required. Upgrading the adapter and improving the software for the new application-layer protocol have resulted in an increase of the bandwidth and reliability of the communication channel.
        Speaker: Ms Svetlana Murashkevich (RUSSIA, JINR)
        Slides
      • 33
        Tango software development at JINR. Tango module for WebSocket connection.
        The report describes a Tango module for WebSocket connections. WebSocket is a communications protocol providing full-duplex communication channels over a single TCP connection. The module makes it possible to carry out both monitoring and control of Tango devices. It also has several modes of operation; depending on the selected mode, you can control either one or any number of required Tango devices. The messages exchanged between the client and the server are in JSON format (an illustrative client-side sketch follows this entry).
        Speaker: Mr Vladimir Gennadyevich Elkin (JINR VBLHEP)
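        A client-side view of the JSON-over-WebSocket exchange described above might look like the sketch below. The gateway URL and the message layout (type, device, attribute fields) are assumptions for illustration; the module's actual message schema is not given in the abstract.

```python
# Illustrative WebSocket client for a Tango monitoring bridge.
# The URL and the JSON message layout are assumed, not the module's real schema.
import asyncio
import json
import websockets  # pip install websockets

async def read_attribute():
    async with websockets.connect("ws://tango-gateway.example.org:8080/ws") as ws:
        # Hypothetical request: read one attribute of the standard Tango test device.
        request = {"type": "read", "device": "sys/tg_test/1", "attribute": "double_scalar"}
        await ws.send(json.dumps(request))
        reply = json.loads(await ws.recv())   # the server answers with a JSON document
        print(reply)

asyncio.get_event_loop().run_until_complete(read_attribute())
```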
      • 34
        LABVIEW BASED MAGNETIC FIELD MAPPING SYSTEM FOR CYCLOTRON DC-280
        The new isochronous cyclotron DC-280 is being created at FLNR JINR. The software application uses LabVIEW and supports pneumatic step movement, data acquisition, and magnet power supply control. A complete 360-degree map is obtained in approximately 14 hours, measuring 148750 field values with a spacing of 10 mm in radius every degree. This paper describes the software part of the magnetic field mapping system based on the LabVIEW DSC module.
        Speaker: Mr Vitali Aleinikov (JINR)
        Slides
      • 35
        Signal synchronization system
        The paper deals with the concept and the first prototype of the new signal synchronization system of the Nuclotron accelerator complex. It describes a new scheme for the collection and distribution of the signals needed to synchronize the many devices of the accelerator complex. Much attention is given to the development of the electronic equipment. The following modules are considered: an optics-to-I/O converter, a pulse-forming block, and interface submodules. It should be stressed that the signal synchronization system, in combination with White Rabbit technology, will be the basis for the global timing system of NICA.
        Speaker: Mr Ilya Shirikov (JINR)
        Slides
    • 16:00
      Coffee break
    • Detector & Nuclear Electronics (Conference Hall)
      Convener: Dr Maxim Karetnikov (VNIIA)
      • 36
        Betatron tune measurement system upgrade at Nuclotron
        A few improvements have been made to enhance the resolution of the Q measurement system, such as the development of an additional NI FlexRIO digitizer module with two 18-bit AD7960 ADCs and a TDC-GP22 for precise beam revolution frequency measurement. A new amplification system for the pickup signals was developed using a diode detection technique, analog filtering and real-time gain adjustment, allowing measurements to be carried out during beam injection and acceleration.
        Speaker: Mr Dmitrii Monakhov (JINR)
        Slides
      • 37
        First Results of the Radiation Monitoring of the GEM Muon Detectors at CMS
        The higher energy and luminosity of the future High-Luminosity LHC require the development and testing of a new type of high-rate detector, the GEM (Gas Electron Multiplier). A monitoring system designed to measure the dose absorbed by the GEM detectors during the tests has recently been described [1]. The system uses a basic detector unit called RADMON. Each unit contains two types of sensors: RadFETs, measuring the total absorbed dose of all radiation, and p-i-n diodes for particle (proton and neutron) radiation. The system has a modular structure, making it easy to increase the number of controlled RADMONs; one module controls up to 12 RADMONs. For the first test, a group of 3 GEM chambers called a supermodule was installed at the inner CMS endcap in March this year. One RADMON was installed on this supermodule for dose monitoring. Through the dosimetric system controller, the measured data are transferred to the experiment data acquisition system. The real dose data are being recorded and will be processed and presented at NEC'2017.
        Speaker: Dr Lubomir Dimitrov (INRNE - BAS)
        Slides
      • 38
        ESIS ions injection, holding and extraction control system
        During the work on the creation of a new Electron String Ion Source (ESIS) for the NICA/MPD project, several electronic modules were created. They include pulsed HV (+3 kV) potential-barrier formation modules used to hold ions, an HV (+3 kV) ion extraction module and several auxiliary modules. The module development process and test results are described.
        Speaker: Mr Dmitriy Ponkin (LHEP JINR)
        Slides
      • 39
        Development of the Booster injection prototype
        The report describes the requirements for the Booster injection system, its operation algorithm and realization details. The control system is based on National Instruments CompactRIO equipment and implements injection device control, synchronization and monitoring. The results of high-voltage tests are presented.
        Speaker: Hristo Nazlev (JINR)
        Slides
    • Triggering, Data Acquisition, Control Systems (Conference Hall)
      Convener: Dr Nikolay Gorbunov (JINR)
      • 40
        The subsystem of the internal beam intensity diagnostics at the Nuclotron
        The superconducting synchrotron Nuclotron is the basis of the new accelerator complex NICA being built at LHEP, JINR. It is very important to monitor the dynamics of its internal beam intensity during an acceleration cycle for proper tuning and functioning of the setup. A new parametric current transformer from Bergoz Instrumentation with a frequency response from DC to 10 kHz is used for measuring the beam DC intensity in the Nuclotron ring. Data acquisition is performed by the multifunctional DAQ device NI6284 (18-bit) from National Instruments. The software complex ensures efficient operation of the intensity monitoring subsystem. It consists of the subsystem's server and several clients specialized for operators, staff members and experimentalists. The software is designed within the TANGO Controls concept and fully integrated into the TANGO system of NICA. The structure, principle of operation, operational experience from recent Nuclotron runs and further improvement of the software complex are reported. (An illustrative TANGO client sketch follows this entry.)
        Speaker: Mr Vasily Andreev (VBLHEP JINR)
        Slides
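        For illustration, a simple TANGO client polling a beam-intensity attribute could look like the sketch below, using the standard PyTango DeviceProxy API. The device and attribute names are hypothetical, not the actual Nuclotron ones.

```python
# Illustrative TANGO client polling a beam-intensity attribute with PyTango.
# The device and attribute names are hypothetical.
import time
import tango  # PyTango

proxy = tango.DeviceProxy("nuclotron/diagnostics/beam_intensity")

for _ in range(10):
    reading = proxy.read_attribute("Intensity")   # DeviceAttribute: value, quality, timestamp
    print(reading.value, reading.quality)
    time.sleep(1.0)
```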
      • 41
        Trigger system and detection of Supernova in the NOvA experiment
        The NOvA experiment utilizes a data acquisition system based on a continuous, deadtime-less readout of the front-end electronics. Performing physics analyses requires a triggering system that can select the relevant events in this data flow. The NOvA Data-Driven Trigger system analyzes all the data collected by the NOvA detectors in real time using hundreds of parallel instances of highly optimized analysis software running within the ARTDAQ framework. This talk focuses on the NOvA triggering system and describes a trigger for detection of the neutrino signal from a supernova explosion in our Galaxy.
        Speaker: Andrey Sheshukov (JINR)
        Slides
    • Plenary (Conference Hall)
      Convener: Dr Alexei Klimentov (Brookhaven National Lab)
      • 42
        INDIGO-DataCloud: Quality of Service in storage
        When preparing the Data Management Plan for larger scientific endeavours, PIs have to balance the most appropriate qualities of storage space along the planned data lifecycle against its price and the available funding. Storage properties can include the media type, implicitly determining access latency and durability of stored data, the number and locality of replicas, as well as the available access protocols or authentication mechanisms. Negotiations between the scientific community and the responsible infrastructures generally happen upfront, when the amount of storage space, the media types (disk, tape and SSD) and the foreseeable data life-cycles are negotiated. With the introduction of cloud management platforms, both in computing and storage, resources can be brokered to achieve the best price per unit of a given quality. However, in order to allow the platform orchestrators to programmatically negotiate the most appropriate resources, a standard vocabulary for the different properties of resources and a commonly agreed protocol to communicate them have to be available. In order to agree on a basic vocabulary for storage space properties, the storage infrastructure group in INDIGO-DataCloud, together with INDIGO-associated and external scientific groups, created a working group under the umbrella of the Research Data Alliance (RDA). As the communication protocol to query and negotiate storage qualities, the Cloud Data Management Interface (CDMI) has been selected. Necessary extensions to CDMI are defined in regular meetings between INDIGO and the Storage Networking Industry Association (SNIA). Furthermore, INDIGO is contributing to the SNIA CDMI reference implementation as the basis for interfacing the various storage systems in INDIGO to the agreed protocol and to provide an official open-source skeleton for systems not maintained by INDIGO partners. In a first step, INDIGO will equip its supported storage systems, such as dCache, StoRM, IBM GPFS and HPSS, and possibly public cloud systems, with the developed interface to enable the INDIGO platform layer to programmatically auto-detect the available storage properties and select the most appropriate endpoints based on its own policies. In a second step, INDIGO will provide means to change the quality of storage, mainly to support the data life cycle, but also to make data available on low-latency media for demanding HPC applications before the requesting jobs are launched, which maps to the 'bring online' command in current HEP frameworks. Our presentation will elaborate on the planned common agreements between the involved scientific communities and the supporting infrastructures, the available software stack, and the integration into the general INDIGO framework. Furthermore, we will demonstrate a first prototype of an example web service, providing a collective view into the storage space offerings of INDIGO partners across Europe, together with their different service qualities, entirely based on the developed CDMI protocol extensions. (An illustrative CDMI query sketch follows this entry.)
        Speaker: Dr Patrick Fuhrmann (DESY)
        Slides
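        As a hint of what the CDMI-based negotiation looks like on the wire, the sketch below queries a storage endpoint's capability containers over HTTP. The endpoint URL is an example, and the exact QoS attribute and profile names depend on the INDIGO CDMI extensions rather than on base CDMI.

```python
# Illustrative CDMI capabilities query (endpoint URL and profile names are examples).
import requests

ENDPOINT = "https://storage.example.org:8443"
HEADERS = {
    "X-CDMI-Specification-Version": "1.1.1",
    "Accept": "application/cdmi-capability",
}

# The data-object capabilities container lists the available storage classes (QoS profiles).
root = requests.get(f"{ENDPOINT}/cdmi_capabilities/dataobject/", headers=HEADERS)
root.raise_for_status()

for child in root.json().get("children", []):
    profile = requests.get(f"{ENDPOINT}/cdmi_capabilities/dataobject/{child}/",
                           headers=HEADERS).json()
    # The metadata carries properties such as latency or number of copies
    # (the attribute names are extension-specific).
    print(child, profile.get("metadata", {}))
```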
      • 43
        High performance computing system in the framework of the Higgs boson studies
        Higgs boson physics is one of the most important and promising fields of study in modern high energy physics. It is important to note that grid computing resources are becoming strictly limited due to the increasing amount of statistics required for physics analyses and the unprecedented LHC performance. One of the possibilities to address the shortfall of computing resources is the usage of institute computer clusters, commercial computing resources and supercomputers. To perform precision measurements of the Higgs boson properties under these conditions, effective instruments to simulate kinematic distributions of signal events are also highly required. In this talk we give a brief description of a modern distribution reconstruction method called Morphing and perform a few efficiency tests to demonstrate its potential. These studies have been performed on the WLCG and the Kurchatov Institute's Data Processing Center, including its Tier-1 grid site and supercomputer. We also analyze the CPU efficiency during these studies for different computing facilities. The reviewed approach demonstrates high efficiency and stability of the Kurchatov Institute's Data Processing Center in the field of Higgs boson physics studies.
        Speaker: Mr Nikita Belyaev (NRC "Kurchatov Institute")
        Slides
      • 44
        DIRAC services for grid and cloud infrastructures
        Speaker: Dr Andrei Tsaregorodtsev (CPPM-IN2P3-CNRS)
        Slides
      • 45
        ALFA:  ALICE-FAIR software framework
        ALFA is a message-queue-based framework for online/offline reconstruction. The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of this framework. Each process in ALFA assumes limited communication with, and reliance on, other processes. Moreover, it does not dictate any application protocols but supports different serialization standards for data exchange between different hardware and software languages, e.g. Protocol Buffers, FlatBuffers, BOOST, MsgPack and ROOT. ALFA has a modular design with separate layers for data transport, process management and deployment, data format, etc. The transport layer in ALFA is called FairMQ; it supports different transport engines such as ZeroMQ, nanomsg and shared-memory transport. The modular design of ALFA and the interfaces between the different layers will be presented. (A plain ZeroMQ illustration of the underlying pattern follows this entry.)
        Speaker: Dr Mohammad Al-Turany (GSI/CERN)
        Slides
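        FairMQ itself is a C++ framework, but the message-passing pattern it builds on can be illustrated with a few lines of plain ZeroMQ, one of the transport engines mentioned above. This is not FairMQ code; the endpoint and the JSON payload are arbitrary choices for the illustration.

```python
# Plain ZeroMQ PUSH/PULL illustration of the message-passing pattern FairMQ builds on.
# This is not FairMQ itself; the payload format and endpoint are arbitrary.
import json
import threading
import zmq  # pyzmq

def producer(endpoint):
    sock = zmq.Context.instance().socket(zmq.PUSH)
    sock.bind(endpoint)
    for i in range(5):
        # Any serialization works here (Protocol Buffers, FlatBuffers, ROOT, ...); JSON for brevity.
        sock.send(json.dumps({"event": i, "hits": [1, 2, 3]}).encode())
    sock.send(b'{"event": -1}')   # end-of-stream marker

def consumer(endpoint):
    sock = zmq.Context.instance().socket(zmq.PULL)
    sock.connect(endpoint)
    while True:
        msg = json.loads(sock.recv())
        if msg["event"] == -1:
            break
        print("reconstructing event", msg["event"])

endpoint = "tcp://127.0.0.1:5555"
t = threading.Thread(target=producer, args=(endpoint,))
t.start()
consumer(endpoint)
t.join()
```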
    • 11:00
      Coffee break
    • Plenary (Conference Hall)
      Convener: Dr Patrick Fuhrmann (DESY)
      • 46
        STAR's GRID Production Framework
        STAR's RHIC computing facility provides over 15K dedicated slots for data reconstruction. However, this number of slots is not always sufficient to satisfy an ambitious and data-challenging physics program, and harvesting resources from outside facilities is paramount to scientific success. At the same time, constraints of remote sites (CPU time limits) do not always provide the flexibility of a dedicated farm. Still, experiments like STAR have a breadth of smaller datasets (both in runtime and size) that can be easily offloaded to remote facilities. Scavenging resources optimizes local efficiency and contributes additional computing time to an experiment that runs every year and therefore needs fast turnaround. We will discuss the software stack of STAR's GRID production framework, including features dealing with multi-site submission, automated re-submission and job tracking, as well as new challenges and possible improvements.
        Speaker: Mr Levente Hajdu (BNL)
        Slides
      • 47
        Status of the project NICA at JINR
        The study of heavy ion collisions is of great interest in high energy physics due to the expected phase transition from nucleonic matter to the quark-gluon plasma. However, a full picture of the effect is hindered by the lack of experimental data in the low energy region for nucleus-nucleus collisions. The goal of the NICA project at JINR is to cover the collision energy range from 2 GeV/n to 11 GeV/n for collisions of protons and of nuclei up to gold. The fixed-target and collider NICA experiments will provide data of good quality and sufficient statistics to clarify the physics of this phenomenon.
        Speaker: Dr Oleg Rogachevskiy (JINR)
        Slides
      • 48
        NICA Computing
        Speaker: Andrey Dolbilov (JINR)
        Slides
      • 49
        ATLAS BigPanDA Monitoring
        Speaker: Tatiana Korchuganova (TPU)
        Slides
    • 13:00
      LUNCH Conference Hall

    • Excursion Conference Hall

    • 19:00
      CONFERENCE DINNER Conference Hall

    • Plenary Conference Hall

      Convener: Prof. Ivan Vankov (Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences)
      • 50
        Modern SQL and NoSQL database technologies for the ATLAS experiment
        Structured data storage technologies evolve very rapidly in the IT world. The LHC experiments, and ATLAS in particular, try to select and use these technologies, balancing the performance for a given set of use cases with the availability, ease of use, ease of getting support, and stability of the product. We definitely and definitively moved from the “one fits all” (or “all has to fit into one”) paradigm to choosing the best solution for each group of data and for the applications that use these data. This talk describes the solutions in use, or under study, for the ATLAS experiment, their selection process and their performance.
        Speaker: Prof. Dario Barberis (University and INFN Genova (Italy))
        Slides
      • 51
        The Trigger Readout Electronics for the Phase-1 Upgrade of the ATLAS Liquid-Argon Calorimeters
        The upgrade of the Large Hadron Collider (LHC) scheduled for the shutdown period of 2018-2019 (Phase-I upgrade) will increase the instantaneous luminosity to about three times the design value. Since the current ATLAS trigger system does not allow a corresponding increase of the trigger rate, an improvement of the trigger system is required. The new trigger signals from the ATLAS Liquid Argon Calorimeter will be arranged in 34000 so-called Super Cells, which achieve 5-10 times better granularity than the current system; this improves the background rejection capabilities through more precise energy measurements and the use of shower shapes to discriminate electrons and photons from jets. The new system will process the Super Cell signals at every LHC bunch crossing with 12-bit precision at a frequency of 40 MHz. The data will be transmitted to the back-end at 5.12 Gb/s using a custom serializer and optical converter. To verify the full functionality, a demonstrator set-up has been installed on the ATLAS detector and operated during LHC Run-2. The talk will give a status report on the hardware developments towards the final readout system design, including the performance of the newly developed ASICs. Their radiation tolerance, the performance of the prototype boards, results of the high-speed link tests with the prototypes and the performance of the demonstrator with collision data will also be reported.
        Speaker: Mr Robert Wolff (CPPM, Aix-Marseille Université, CNRS/IN2P3 (FR))
        Slides
      • 52
        The Phase-II upgrade of ATLAS Calorimeter
        This presentation will show the status of the upgrade projects of the ATLAS calorimeter system for the high-luminosity phase of the LHC (HL-LHC). For the HL-LHC, the instantaneous luminosity is expected to increase up to L ≃ 7.5 × 10^{34} cm^{−2} s^{−1} and the average pile-up up to 200 interactions per bunch crossing. The Liquid Argon (LAr) calorimeter electronics will need to be replaced to cope with these challenging conditions: the expected radiation doses will exceed the qualification range of the current readout system, and the upgraded trigger system will require much longer data storage in the electronics (up to 60 µs) than the current system can sustain. The status of the R&D on the low-power ASICs (pre-amplifier, shaper, ADC, serializer and transmitters) and on the readout electronics design will be discussed. Moreover, a High Granularity Timing Detector (HGTD) is proposed to be added in front of the LAr calorimeters in the end-cap region (2.4 < |eta| < 4.2) for pile-up mitigation at the Level-0 trigger and in offline reconstruction. The HGTD will correlate the energy deposits in the calorimeter with the different proton-proton collision vertices by using time-of-flight information with high timing resolution (30 ps per readout cell) based on silicon sensor technologies. Current test-beam results will be presented as well as the performance expectations for the new detector.
        Speaker: Francesco Tartarelli (Università degli Studi e INFN Milano)
        Slides
      • 53
        The ATLAS Data Acquisition System in LHC Run 2
        The LHC has been providing proton-proton collisions with record intensity and energy since the start of Run 2 in 2015. In the ATLAS experiment the Data Acquisition system is responsible for the transport and storage of the more complex event data, at the higher rates that the new collision environment implies. Data from events selected by the first-level hardware trigger are subject to further filtering by software running on a commodity, load-balanced processing farm of some 2000 servers. The data are transferred from the detector electronics across 1900 optical links to custom buffer hardware hosted in 100 commodity server PCs, and then moved across the system for processing over a high-bandwidth network at an average throughput of 30 GB/s. Accepted events are transported to a data-logging system for final packaging and transfer to permanent storage, with a final average output bandwidth of 1.5 GB/s. The whole system is actively monitored to maximise efficiency and minimise downtime. Due to the scale of the system and the challenging collision environment, the ATLAS DAQ system is a prime example of the effective use of many modern technologies and standards in a high-energy physics data-taking environment, with state-of-the-art networking, data transport and real-time monitoring applications. This presentation will cover the overall design of the system, focusing on the novel technology elements in use, before demonstrating its performance so far in LHC Run 2.
        Speaker: Mr Mikel Eukeni Pozo Astigarraga (CERN)
        Slides
    • 11:00
      Coffee break Conference Hall

    • Distributed Computing. GRID & Cloud computing Conference Hall

      Convener: Dr Andrei Tsaregorodtsev (CPPM-IN2P3-CNRS)
      • 54
        JINR Grid Tier-1@Tier-2
        The JINR grid infrastructure is the main component of the JINR Multifunctional Information and Computing Complex (MICC). There are two grid sites: the Tier-1 for the CMS experiment at the LHC, and the Tier-2, which supports the virtual organizations (VOs) related to JINR's participation in the LHC experiments (ATLAS, ALICE, CMS, LHCb), FAIR (CBM, PANDA) and other VOs (NICA, STAR, COMPASS, NOvA) within large-scale international collaborations involving JINR researchers. The grid resources of the MICC JINR are part of the global Worldwide LHC Computing Grid (WLCG) infrastructure, which was formed to support the LHC experiments. Up to 2015 the main element of the JINR grid infrastructure was the Tier-2 center, one of the best resource centers of the Russian Data Intensive Grid (RDIG), part of the global WLCG infrastructure and a member of the European EGI infrastructure. The official inauguration of the JINR Tier-1 for the CMS experiment in March 2015 marked a significant enhancement of the JINR grid computing infrastructure and an important contribution to the WLCG. During the past two years it has been tuned and upgraded in order to cope with the increasing amount of data coming from the CMS experiment. The present status of the JINR grid infrastructure and plans for future development will be presented.
        Speaker: Dr Tatiana Strizh (JINR)
        Slides
      • 55
        Application of the HEP Computing Tools for Brain studies
        The Production and Distributed Analysis (PanDA) system was designed to meet the requirements for a workload management system (WMS) capable of operating at the LHC data-processing scale. It has been used in the ATLAS experiment since 2005 and is now part of the BigPanDA project, expanding into a meta-application that provides transparency of data processing and workflow management for High Energy Physics (HEP) and other data-intensive sciences. BigPanDA was one of the first WMSs extended beyond the Grid to support High Performance Clusters, supercomputers, clouds and Leadership Computing Facilities (LCF). This could not fail to attract attention to the system from other compute-intensive sciences such as brain studies. In 2017, a pilot project was started between the BigPanDA team and the Blue Brain Project (BBP) of the Ecole Polytechnique Federale de Lausanne (EPFL) in Lausanne, Switzerland. This proof-of-concept project aims to demonstrate the efficient application of the BigPanDA system to support the complex scientific workflow of the BBP, which relies on a mix of desktops, clusters and supercomputers to reconstruct and simulate accurate models of brain tissue. During the first phase of the collaboration we demonstrated the execution of computational jobs on a variety of distributed computing systems operated by the BBP. The targeted systems for the demonstration included: Intel x86 / NVIDIA GPU based BBP clusters located in Geneva (47 TFlops) and Lugano (81 TFlops), the BBP IBM BlueGene/Q supercomputer (0.78 PFlops and 65 TB of DRAM memory) located in Lugano, the Titan supercomputer with a peak theoretical performance of 27 PFlops operated by the Oak Ridge Leadership Computing Facility (OLCF), and cloud resources such as the Amazon cloud. To hide execution complexity and reduce manual work for end users, we developed a web interface to submit, control and monitor user tasks and seamlessly integrated it with the PanDA WMS. The project demonstrates that the software tools and methods for processing large volumes of experimental data, which were initially developed for the experiments at the LHC accelerator, can be successfully applied to other scientific fields.
        Speaker: Dr Alexei Klimentov (Brookhaven National Lab)
        Slides
      • 56
        COMPASS Grid Production System
        The LHC Computing Grid was a pioneering integration effort that managed to unite computing and storage resources all over the world and thus made them available to the experiments at the Large Hadron Collider. During a decade of LHC computing, the Grid software has learned to effectively utilize different types of computing resources, such as classic computing clusters, clouds and high-performance computers. And while the resources the experiments use are the same, the data flow differs from experiment to experiment. A crucial part of each experiment's computing is the production system, which describes the logic of and controls the experiment's data processing. COMPASS has always relied on CERN facilities and, when CERN, during a hardware and software upgrade, started migrating to resources available only via the Grid, COMPASS faced the problem of having insufficient resources to process its data. To enable COMPASS data processing via the Grid, development of a new production system was started. The key features of the modern production system for COMPASS are: distributed data processing, support of different types of computing resources, and support of an arbitrary number of computing sites. The building blocks of the production system are taken from the achievements of the LHC experiments, but the data-processing logic is COMPASS-specific and unique. Details of the implementation of the Grid production system for COMPASS are described in the report.
        Speaker: Mr Artem Petrosyan (JINR)
        Slides
      • 57
        Optimizing new components of PanDA for ATLAS production on HPC resources
        The Production and Distributed Analysis system (PanDA) has been used for workload management in the ATLAS Experiment for over a decade. It uses pilots to retrieve jobs from the PanDA server and execute them on worker nodes. While PanDA has been mostly used on Worldwide LHC Computing Grid (WLCG) resources for production operations, R&D work on cloud and HPC resources has been ongoing for many years. These efforts have led to significant usage of large-scale HPC resources in the past couple of years. In this talk we will describe the changes to the pilot which enabled the use of HPC sites by PanDA, specifically the Titan supercomputer at Oak Ridge National Laboratory. Furthermore, it was decided in 2016 to start a fresh redesign of the Pilot with a more modern approach, to better serve present and future needs of ATLAS and of other collaborations interested in using the PanDA System. Another new project for the development of a resource-oriented service, PanDA Harvester, was also launched in 2016. The main goal of Harvester is flexible distribution of payloads to opportunistic resources like HPCs and clouds. Both applications are now in full development after a year of studying use cases, trying different designs and deciding on the shared-components model. This talk will give an overview of the evolution of the HPC pilot into the Pilot 2 and Harvester projects for better utilization of HPC resources.
        Speaker: Mr Danila Oleynik (JINR LIT)
        Slides
      • 58
        Applying Big Data solutions for log analytics in the PanDA infrastructure
        PanDA is the workflow management system of the ATLAS experiment at the LHC and is responsible for generating, brokering and monitoring up to two million jobs per day across 150 computing centers in the Worldwide LHC Computing Grid. The PanDA core consists of several components deployed centrally on around 20 servers. The daily log volume is around 400 GB. In certain cases, troubleshooting a particular issue on the raw log files can be compared to searching for a needle in a haystack and requires a high level of expertise. We therefore decided to build on trending Big Data solutions and utilize the ELK infrastructure (Filebeat, Logstash, Elasticsearch and Kibana) to process, index and analyze our log files. This allows us to overcome the troubleshooting complexity, provides a better interface to the operations team and generates advanced analytics to understand our system. This paper will describe the features of the ELK stack, our infrastructure, and the optimal configuration settings and filters. We will provide examples of graphs and dashboards generated through the ELK system to demonstrate its potential. Finally, we will show the current integration of Kibana with the PanDA monitoring frontend and other usage possibilities, such as proactive notification of exceptions in the system.
        Speaker: Mr Fernando Barreiro Megino (University of Texas at Arlington)
        Slides
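        As an illustration of the kind of query such an ELK setup enables, the sketch below uses the elasticsearch-py client (7.x-style search call) to pull recent error messages and aggregate them by component. The index pattern and field names are assumed for the example and do not reflect the actual PanDA configuration.

```python
# Sketch of querying indexed PanDA-style logs in Elasticsearch.
# Index pattern and field names ("panda-logs-*", "loglevel", "component")
# are assumed for illustration only.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

query = {
    "query": {
        "bool": {
            "must": [
                {"term": {"loglevel": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-1h"}}},
            ]
        }
    },
    "aggs": {"by_component": {"terms": {"field": "component.keyword"}}},
    "size": 10,
}

result = es.search(index="panda-logs-*", body=query)
for hit in result["hits"]["hits"]:
    print(hit["_source"].get("message", ""))
for bucket in result["aggregations"]["by_component"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```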
      • 59
        Experience with containers for OSG NovA jobs
        Today's physics experiments strongly rely on computing, not only during data-taking periods: a huge amount of computing resources is needed later for offline data analysis, to obtain precise physics measurements from the enormous amount of recorded raw data and Monte-Carlo simulations. Large collaborations with members from many countries are essential for successful research on complex experimental infrastructure, and within such organizations it was natural to adopt a distributed computing model. Besides its broad physics program, the Institute of Physics of the Czech Academy of Sciences (FZU) also serves as a regional computing center that supports grid computing for several big experiments (WLCG, OSG, ...) and local users' analyses. It is becoming difficult to provide an optimal uniform computing environment to the growing number of supported user groups with their different, or even contradictory, requirements. We would also like to explore new features that come with modern systems, but the software used by the experiments is often not certified for the latest versions, and experiments in their final phase do not really want any changes in their computing environment. To satisfy all our users' requirements, make the most efficient use of modern hardware and optimally utilize all resources in our cluster, we decided to migrate our local batch system to HTCondor. HTCondor provides us with the means to run jobs in isolated, experiment-specific environments by utilizing lightweight container technology. With jobs running in containers we can still accept OSG NOvA grid jobs while at the same time installing a modern OS on our new hardware.
        Speaker: Mr Petr Vokac (Institute of Physics of the Czech Academy of Sciences)
        Slides
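        A minimal sketch of the container-based job model discussed above, using HTCondor's docker universe: a submit description is generated and handed to condor_submit. The container image, executable and resource requests are placeholders; FZU's actual configuration and the OSG NOvA job wrappers are not shown.

```python
# Sketch: generating and submitting an HTCondor job that runs inside a
# container via the docker universe. Image name and executable are placeholders.
import subprocess
import textwrap

submit_description = textwrap.dedent("""\
    universe       = docker
    docker_image   = opensciencegrid/osgvo-el6:latest
    executable     = analysis.sh
    arguments      = $(Process)
    output         = job.$(Process).out
    error          = job.$(Process).err
    log            = job.log
    request_cpus   = 1
    request_memory = 2 GB
    queue 4
""")

with open("container_job.sub", "w") as f:
    f.write(submit_description)

# Equivalent to running `condor_submit container_job.sub` on a submit node.
subprocess.run(["condor_submit", "container_job.sub"], check=True)
```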
    • Triggering, Data Acquisition, Control Systems Conference Hall

      Convener: Mr Robert Wolff (CPPM, Aix-Marseille Université, CNRS/IN2P3 (FR))
      • 60
        The ATLAS Trigger system upgrade and performance in Run 2
        The ATLAS trigger has been used very successfully for online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a centre-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which result from the almost doubled centre-of-mass collision energy and the increased instantaneous luminosity of the LHC. At the Level-1 trigger, the improvements undertaken resulted in more pile-up robust selection efficiencies and event rates, and in a reduction of fake candidate particles. A new hardware system, designed to analyse event topologies, supports a more refined event selection at Level-1. A hardware-based high-rate track reconstruction, currently being commissioned, enables the software trigger to make use of tracking information at the full input rate. Together with a re-design of the high-level trigger to deploy more offline-like reconstruction techniques, these changes improve the performance of the trigger selection turn-on and efficiency to nearly that of the offline reconstruction. In order to prepare for the anticipated further luminosity increase of the LHC in 2017/18, improving the trigger performance remains an ongoing endeavour, with coping with the large number of pile-up events being one of the most prominent challenges. This presentation gives a short review of the ATLAS trigger system and its performance in 2015/16 before describing the significant improvements in selection sensitivity and pile-up robustness which we implemented in preparation for the expected highest-ever luminosities of the 2017/18 LHC.
        Speaker: Savanna Shaw (University of Manchester)
        Slides
      • 61
        The design and performance of the ATLAS Inner Detector trigger in high pileup collisions at 13 TeV at the Large Hadron Collider
        The design and performance of the ATLAS Inner Detector (ID) trigger algorithms running online on the high-level trigger (HLT) processor farm for 13 TeV LHC collision data with high pile-up are discussed. The HLT ID tracking is a vital component of all physics signatures in the ATLAS trigger, allowing the precise selection of the rare or interesting events necessary for physics analysis without overwhelming the offline data storage in terms of both size and rate. To cope with the high interaction rates expected in the 13 TeV LHC collisions, the ID trigger was redesigned during the 2013-15 long shutdown. The performance of the ID trigger in the 2016 data from 13 TeV LHC collisions has been excellent and exceeded expectations as the interaction multiplicity increased throughout the year. The detailed efficiencies and resolutions of the trigger in a wide range of physics signatures are presented, demonstrating how well the trigger responded under the extreme pile-up conditions. The performance of the ID trigger algorithms in the first data from the even higher interaction-multiplicity collisions of 2017 is also presented, illustrating how the ID tracking continues to enable the ATLAS physics program now and will continue to do so in the future.
        Speaker: Callum Kilby (on behalf of the ATLAS collaboration)
        Slides
      • 62
        Performance of the ATLAS Muon Trigger in Run 2
        Events containing muons in the final state are an important signature for many analyses being carried out at the Large Hadron Collider (LHC), including both standard model measurements and searches for new physics. To be able to study such events, it is required to have an efficient and well-understood muon trigger. The ATLAS muon trigger consists of a hardware based system (Level 1), as well as a software based reconstruction (High Level Trigger). Due to high luminosity and pile up conditions in Run 2, several improvements have been implemented to keep the trigger rate low while still maintaining a high efficiency. Some examples of recent improvements include requiring coincidence hits between different layers of the muon spectrometer, improvements for handling overlapping muons, and optimised muon isolation. We will present an overview of how we trigger on muons, recent improvements, and the performance of the muon trigger in Run 2 data.
        Speaker: Dr Marcus Morgenstern (on behalf of the ATLAS collaboration)
        Slides
      • 63
        The ATLAS Trigger Menu design for higher luminosities in Run 2
        The ATLAS experiment aims at recording about 1 kHz of physics collisions, starting from an LHC design bunch-crossing rate of 40 MHz. To reduce the large background rate while maintaining a high selection efficiency for rare physics events (such as beyond-the-Standard-Model physics), a two-level trigger system is used. Events are selected based on physics signatures such as the presence of energetic leptons, photons, jets or large missing energy. The trigger system exploits topological information, as well as multivariate methods, to carry out the necessary physics filtering for the many analyses pursued by the ATLAS community. In total, the ATLAS online selection consists of nearly two thousand individual triggers. A Trigger Menu is the compilation of these triggers; it specifies the physics selection algorithms to be used during data taking and the rate and bandwidth a given trigger is allocated. Trigger menus must reflect the physics goals of the collaboration for a given run, but also take into consideration the instantaneous luminosity of the LHC and limitations from the ATLAS detector readout and the offline processing farm. For the 2017 run, the ATLAS trigger has been enhanced to be able to handle higher instantaneous luminosities (up to 2.0x10^{34}cm^{-2}s^{-1}) and to ensure the robustness of the selection against a higher average number of interactions per bunch crossing. In this presentation we describe the design criteria for the Run-2 trigger menu. We discuss several aspects of the process of planning the trigger menu, starting from how the ATLAS physics goals and the need for detector performance measurements enter the menu design, and how rate, bandwidth and CPU constraints are folded in during the compilation of the menu. We present the tools that allow us to predict and optimize the trigger rates and CPU consumption for the anticipated LHC luminosities. We outline the online system that we implemented to monitor deviations from the individual trigger target rates and to react quickly to changing LHC conditions and data-taking scenarios. Finally we give a glimpse of the 2017 Trigger Menu, allowing the listener to get a taste of the vast physics program that the trigger supports.
        Speaker: Emma Torro (Valencia U., IFIC)
        Slides
      • 64
        The development of Online Event Display using ATLAS TDAQ for the NICA experiments
        One of the problems to be solved in high-energy physics experiments on particle collisions and in fixed-target experiments is the online visual presentation of events during the experiment run. The report describes the implementation of this task, the so-called Online Event Display, for the current BM@N experiment and the future MPD (Multi-Purpose Detector) experiment at the Nuclotron-based Ion Collider facility (NICA) under construction at the Joint Institute for Nuclear Research. One of the main aspects of the development, which will be shown in the presentation, is the integration of ATLAS TDAQ components to transfer raw event data for visualization in the Online Event Display. The report includes a brief description of these TDAQ components. Another important issue that will be discussed concerns speeding up the track reconstruction to increase the number of events viewed in the monitoring system per second. The implemented event display, designed for use in offline and online modes, is considered together with its options and features as well as its integration with our software environments (BmnRoot and MpdRoot). Examples of the graphical representation of simulated and reconstructed points and particle tracks in the BM@N and MPD geometries will be shown for collisions at different energies and with different particles, such as deuterons, carbon and gold ions.
        Speaker: Dr Konstantin Gertsenberger (JINR)
        Slides
      • 65
        MultiPurpose Detector - MPD
        The multipurpose MPD detector is the main tool for studying the properties of hot and dense baryonic matter formed in collisions of heavy ions at the NICA accelerator complex. The sufficiently high luminosity of the collider, the complexity and diversity of the physical tasks make high demands on the performance of detectors and service systems of the MPD. The report gives a brief overview of the physical program of the MPD experiment and some technical characteristics of all the main elements of the experimental setup.
        Speaker: Mr Vadim Babkin (Joint Institute for Nuclear Research)
        Slides
    • 13:00
      LUNCH Conference Hall

    • Distributed Computing. GRID & Cloud computing Conference Hall

      Convener: Dr Tatiana Strizh (JINR)
      • 66
        JINR Member States cloud infrastructure
        One of the possible ways to speed up the scientific research projects in which JINR and organizations from its Member States participate is to join computational resources. This can be done in a few ways, one of which is to build distributed cloud infrastructures integrating the local private clouds of JINR and of organizations from its Member States. To implement such a scenario, a cloud-bursting based approach was chosen, since it provides more flexibility in the resource management of each cloud integrated into such a distributed infrastructure. For the time being, a few private clouds of organizations from the JINR Member States have already been integrated with the JINR one with the help of a custom cloud-bursting driver developed by the JINR team. Various aspects of that activity are covered in more detail in the article, as well as implemented and planned changes in the JINR cloud, such as the deployment of a smart scheduler in production, the migration from DRBD high availability (HA) to native OpenNebula HA, the deployment of Ceph storage and the migration of VM disks to it, and others.
        Speaker: Mr Nikita Balashov (JINR)
        Slides
      • 67
        Multi-level monitoring system for Multifunctional Information and Computing Complex at JINR
        The Multifunctional Information and Computing Complex (MICC) is one of the basic scientific facilities of the Joint Institute for Nuclear Research. It provides 24×7 support for a vast range of competitive research conducted at JINR at a global level. The MICC consists of four major components: the grid infrastructure, the central computing complex, the JINR private cloud and the high-performance heterogeneous cluster HybriLIT. All major components rely on the network and engineering infrastructure. It is important to supervise all of the components on three levels: hardware level, network level and service level. Currently there are many monitoring systems built on different technologies, which are used by different user groups and administrators. All monitoring systems are mostly independent, despite the fact that some of them collect the same monitoring data. Their independence makes it difficult to see the whole picture of the effectiveness and bottlenecks of the MICC, because the data are scattered among many systems. The role of the multi-level monitoring system for the MICC is to unite the existing systems and solve that problem: to provide high-level information about the whole computing complex and its services. All current MICC monitoring systems, approaches and methods are described and analyzed in this work: monitoring platforms, data collection methods, data storage, visualization, notification and analytics.
        Speaker: Mr Igor Pelevanyuk (JINR)
        Slides
      • 68
        JINR cloud computing in the NOvA experiment
        NOvA is a large-scale neutrino experiment in which JINR takes part in many areas, including those connected with the use of information technologies. A cloud resource was provided by the JINR computing center for the NOvA experiment, within which a pool of virtual machines was deployed to give local JINR users interactive access. The users can employ this service for neutrino event modeling, for supporting experimental data acquisition and control, and for performing physics analysis.
        Speaker: Oleg Samoylov (JINR)
        Slides
      • 69
        Optimization of the JINR Cloud’s Efficiency
        Clouds built on the Infrastructure-as-a-Service (IaaS) model (such as the JINR Cloud) give us new universal and flexible tools and ways to use computing resources. These new tools may help scientists speed up their research work, but at the cost of a significant drop (compared to more traditional systems in science, such as the grid) in the overall utilization efficiency of the underlying infrastructure. The talk covers the Smart Cloud Scheduler project aimed at optimizing the performance of IaaS-based clouds, including its architecture, development status and plans. The project includes the development of a software framework that allows one to implement custom schemes of dynamic reallocation and consolidation of virtual machines. The resulting system will make it possible to dynamically rebalance the cloud workload in an automated fashion in order to increase the overall infrastructure utilization efficiency.
        Speaker: Mr Nikita Balashov (JINR)
        Slides
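        For illustration only, the sketch below applies a textbook first-fit-decreasing heuristic to pack virtual machines onto as few hosts as possible, which is one simple form of the consolidation the Smart Cloud Scheduler aims to automate; it is not the project's actual algorithm, and the VM sizes are arbitrary.

```python
# Illustrative first-fit-decreasing consolidation of virtual machines onto
# as few hosts as possible. A textbook heuristic, not the actual Smart Cloud
# Scheduler algorithm; numbers are arbitrary.
def consolidate(vms, host_capacity):
    """vms: dict name -> cores used; returns a list of host placements."""
    hosts = []  # each host is a dict with remaining capacity and placed VMs
    for name, cores in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
        for host in hosts:
            if host["free"] >= cores:
                host["free"] -= cores
                host["vms"].append(name)
                break
        else:
            hosts.append({"free": host_capacity - cores, "vms": [name]})
    return hosts

placements = consolidate(
    {"vm-a": 8, "vm-b": 2, "vm-c": 4, "vm-d": 6, "vm-e": 1}, host_capacity=12)
for i, host in enumerate(placements):
    print(f"host-{i}: {host['vms']} (free cores: {host['free']})")
```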
      • 70
        Clouds of JINR, University of Sofia and INRNE Join Together
        JINR develops a cloud based on OpenNebula that is open for integration with the clouds of the Member States. The paper presents the state of the three-year project that aims to create a backbone of that cloud in Bulgaria. The University of Sofia and INRNE participate in this initiative. It is a target project funded by JINR on the basis of the research plan of the institute.
        Speaker: Prof. Vladimir Dimitrov (University of Sofia)
        Slides
      • 71
        NFV Cloud Infrastructure for Researchers
        Modern research cloud infrastructures are intended to help researchers prepare virtual environments that satisfy various specific requirements. The focus could be on the network topology and the provision of different network functions (NAT, firewall, IDS, vSwitch, etc.) in order to provide a testbed for network research or for network device testing. Another focus could be on compute resources, providing the researcher with a computational cluster, for example a Hadoop cluster. Regardless of the purpose for which the researcher uses the cloud infrastructure, we need a unified system that manages and orchestrates all types of cloud infrastructure resources. Network Function Virtualization (NFV) techniques separate the network function logic from the hardware that executes it. There are several basic use cases for NFV. The network function (NF) lifecycle is a staged process: each NF passes through a deployment stage, an initialization stage, a configuration stage, an execution stage and an undeployment stage. In our demo, the management and orchestration of the NF lifecycle in a data center will be demonstrated. We would like to present a cloud platform architecture that adheres to the ETSI NFV MANO reference model. As we will show, the platform architecture and design successfully meet the requirements of researcher cloud VNFs. We call our platform Cloud Conductor, or C2. The C2 platform provides full VNF lifecycle support: initialization, configuration, execution and uninitialization. The C2 platform provides virtual network services (VNS) to researchers through CPE virtualization. Virtual customer premises equipment (vCPE) is a way to deliver network services such as routing, firewall security and access to computational resources by using software rather than dedicated hardware devices. By virtualizing the CPE, the C2 platform can simplify and accelerate service delivery, remotely configuring and managing devices and allowing researchers to order new services or adjust existing ones on demand. The CloudGW module will be discussed in particular; it is aimed at providing an entry border point for traffic from the researcher to the virtual cloud infrastructure for further processing, depending on the required virtual infrastructure (DBaaS, Hadoop, classic IaaS, etc.). The main value of a general service-oriented platform is the diversity of the predefined VNFs that can be deployed on the platform and the quality of service this cloud platform guarantees to researchers. We believe it is fundamentally wrong to create dedicated VNFs for a specific cloud platform. All VNF descriptions should be written in a standard domain-specific language (DSL) to provide isolation between a cloud platform API and a VNF implementation. For the purposes of VNF specification we use TOSCA (Topology and Orchestration Specification for Cloud Applications). This description, called a TOSCA template, is everything that is needed to describe a VNF: it covers the structure of a cloud application and its management policies, and specifies an OS image and the scripts that can start, stop and configure the application that implements the VNF. The C2 platform assumes that a cloud manager provides the TOSCA template as a zip or tar archive with a predetermined structure. We would also like to share the results of our analysis of the NFV MANO platform overheads and discuss our future plans.
        Speaker: Mr Ruslan Smeliansky (ARCCN)
        Slides
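        The staged NF lifecycle named in the abstract (deployment, initialization, configuration, execution, undeployment) can be pictured as a simple state machine; the sketch below is only such an illustration and is unrelated to the actual C2 platform code.

```python
# Minimal sketch of the staged NF lifecycle described above
# (deploy -> initialize -> configure -> execute -> undeploy).
# Illustration only, not the C2 platform implementation.
class VNFLifecycle:
    STAGES = ["deployed", "initialized", "configured", "executing", "undeployed"]

    def __init__(self, name):
        self.name = name
        self.stage = None

    def advance(self):
        """Move the VNF to the next lifecycle stage, in order."""
        if self.stage is None:
            self.stage = self.STAGES[0]
        else:
            idx = self.STAGES.index(self.stage)
            if idx + 1 >= len(self.STAGES):
                raise RuntimeError(f"{self.name} is already undeployed")
            self.stage = self.STAGES[idx + 1]
        print(f"{self.name}: entered stage '{self.stage}'")

# Hypothetical vCPE firewall walked through all five stages.
vcpe = VNFLifecycle("vCPE-firewall")
for _ in VNFLifecycle.STAGES:
    vcpe.advance()
```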
    • Triggering, Data Acquisition, Control Systems Conference Hall

      Convener: Dr Oleg Rogachevskiy (JINR)
      • 72
        Status of the test stand TOF MPD system
        This report is devoted to the status of the test stand of the TOF MPD system. The stand is planned to be used to carry out methodical research and mass testing of detectors for the MPD experiment at the NICA collider. The setup is described in detail. The investigation has been performed at the Veksler and Baldin Laboratory of High Energy Physics, JINR.
        Speaker: Mr Alexandr Dmitriev (LHEP JINR)
        Slides
      • 73
        Development of L0 trigger for study of AA- collisions in BM@N/Nuclotron and MPD/NICA experiments
        The L0 trigger system plays a crucial role in the fast and effective selection of AA collisions in both fixed-target and collider experiments. The concepts of an active target area for the BM@N/Nuclotron experiment and of a fast vertex-trigger system developed for the MPD experiment at the NICA collider are considered. The requirements on the trigger detectors and electronics, as well as some test results, are discussed.
        Speaker: Dr Vladimir Yurevich (JINR)
        Slides
      • 74
        Readout electronics for TPC detector in the MPD/NICA project
        The TPC barrel is placed in the middle of the Multi-Purpose Detector and provides tracking and identification of charged particles in the pseudorapidity range │η│≤ 1.2. Tracks in the TPC are registered by 24 readout chambers placed at both end-caps of the sensitive volume of the barrel. The readout system of one chamber consists of a set of front-end cards (FECs) and a readout control unit (RCU). The FECs collect the signals directly from the registration chamber pads, amplify, digitize and process them, and transfer the data to the RCU. To ensure good reconstruction of all tracks, the 95232 electronic channels must meet strong requirements: a signal-to-noise ratio of 30, an equivalent noise charge below 1000 e-, and a power consumption of less than 100 mW per channel.
        Speaker: Mr Stepan Vereschagin (JINR)
        Slides
      • 75
        Multi-Purpose Detector (MPD) Slow Control System, historical background, present status and plans
        The Multi-Purpose Detector (MPD) is a 4π spectrometer capable of detecting charged hadrons, electrons and photons in heavy-ion collisions at high luminosity in the energy range of the NICA collider. Among many others, one of the crucial tasks necessary for the successful operation of such a complex apparatus is providing adequate monitoring of the operational parameters and convenient control of the various equipment used in the experiment. The report presents the approaches and basic principles of the development of the Slow Control system for the MPD. A Tango Controls based approach allows unifying the representation and storage of slow-control data from many diverse data sources. The presently running BM@N experiment serves as a perfect testbench for the software. Special attention is paid to the integrity of the slow-control data and to operational stability. The present status and plans for the design of the MPD slow-control system are also presented.
        Speaker: Mr Vitaly Shutov (Borisovich)
        Slides
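        As a minimal illustration of reading a slow-control parameter through Tango Controls, the sketch below polls one attribute of the standard TangoTest device via PyTango and keeps a small local history; the device and attribute names are the generic test ones, not MPD or BM@N devices.

```python
# Sketch of polling and recording one operational parameter via Tango Controls
# (PyTango). "sys/tg_test/1" is the standard TangoTest device, used here as a
# stand-in for a real slow-control device.
import time
import tango

proxy = tango.DeviceProxy("sys/tg_test/1")

history = []
for _ in range(5):
    value = proxy.read_attribute("double_scalar").value
    history.append((time.time(), value))  # local timestamp + reading
    time.sleep(1.0)

for timestamp, value in history:
    print(timestamp, value)
```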
      • 76
        Control system of the superconducting magnets cryogenic test bench for the NICA accelerator complex
        The control system of the superconducting magnets cryogenic test bench has been designed using Tango Controls. It includes the thermometry system and the satellite refrigerators control system. The report describes the hardware and software modules for data acquisition and management, the archiving system, the configuration system, the access control system, the web service and the web client applications.
        Speaker: Mr Georgy Sedykh (JINR)
        Slides
      • 77
        Data acquisition and data processing software for magnetic measurements for NICA magnets
        The software used at the magnetic measurements test bench for the superconducting magnets of the NICA and FAIR projects is described. The main measurement program, which collects the measured data and is responsible for the sensor position, is presented together with the software for processing the measured data. The filtering and smoothing algorithms based on wavelets and splines that are applied before data processing are also described.
        Speaker: Mr Alexander Bychkov (LHEP)
        Slides
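        A generic sketch of the wavelet-plus-spline preprocessing named above, written with PyWavelets and SciPy on a synthetic signal; the wavelet family, threshold rule and smoothing factor are arbitrary choices for the example and are not taken from the test-bench software.

```python
# Generic wavelet-denoising / spline-smoothing sketch (PyWavelets + SciPy).
# The signal is synthetic; parameters are arbitrary example choices.
import numpy as np
import pywt
from scipy.interpolate import UnivariateSpline

x = np.linspace(0.0, 1.0, 512)
signal = np.sin(2 * np.pi * 3 * x) + 0.1 * np.random.randn(x.size)

# Wavelet denoising: decompose, soft-threshold detail coefficients, rebuild.
coeffs = pywt.wavedec(signal, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate
threshold = sigma * np.sqrt(2 * np.log(signal.size))     # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                        for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: signal.size]

# Additional spline smoothing of the denoised curve.
spline = UnivariateSpline(x, denoised, s=0.5)
smoothed = spline(x)
print("residual RMS:", np.sqrt(np.mean((smoothed - np.sin(2 * np.pi * 3 * x)) ** 2)))
```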
    • 16:00
      Coffee break Conference Hall

    • Distributed Computing. GRID & Cloud computing Conference Hall

      Convener: Dr Tatiana Strizh (JINR)
      • 78
        Approaches to building a cloud-based scientific computing infrastructure
        The paper presents the results of work focused on building a heterogeneous cloud-based scientific computing infrastructure. The main purpose of the infrastructure is to give researchers the possibility to access, on demand, a wide range of different types of resources that can be physically located in local, federated and GEANT-offered clouds. These resources include pure and customized virtual machines with preinstalled and configured software, as well as GRID and HPC facilities, on the basis of the virtualization paradigm within an integrated cloud infrastructure. The creation of a "Centre of Excellence" is considered, where a researcher can start with the study of parallel cluster systems and the development and debugging of initial versions of parallel applications, and later scale them to more powerful resources. The aim of the proposed infrastructure is to provide researchers with access to a multi-cloud platform with horizontal and vertical scaling, self-healing (the ability of a system to recover from failures) and different SLA levels, depending on the duration of the researcher's experiments. To ensure the operation of a federated mechanism for access to distributed computing resources, approaches were investigated and work was finalized on solutions that provide unified access to cloud infrastructures and can be integrated into the Research & Educational identity management federations operated within the eduGAIN inter-federation authorization & authentication mechanism (AAI). Perspectives for the utilization of virtualization technologies for the integration of Grid and HPC clusters into heterogeneous computer infrastructures that offer effective computing resources and end-user interfaces are considered. Keywords: distributed computing technology, Cloud computing, High Performance Computing, computational clusters, Federated Cloud on-demand Services
        Speaker: Mr Nicolai Iliuha (RENAM)
        Slides
      • 79
        Service Reliability with the Cloud of Data Centers under Openstack
        ITMO University (ifmo.ru) is developing a cloud of geographically distributed data centers under OpenStack. The term “geographically distributed” in our proposal means data centers (DCs) located in different places, hundreds or thousands of kilometers apart. The authors follow the concept of a “dark” DC, i.e. a DC that has to perform normal operation without permanent on-site maintainers, even with minor problems (a single machine or a number of disk drives going down). Periodically, staff might visit the DC and fix the problems. If a reliable operation scheme is implemented, this helps to reduce the overall maintenance cost of the cloud of DCs. The proposal includes many aspects, among them the network architecture and features, the main storage engine, …, and service reliability. The last one is probably the most important. The authors plan to describe their thoughts on and experiments with service reliability for geographically distributed data centers under OpenStack. In particular, it is planned to discuss OpenStack deployment configurations and failover recovery tools.
        Speaker: Mr andrey shevel (PNPI, ITMO)
        Slides
      • 80
        Resource sharing based on HTCondor for multiple experiments
        HTCondor, a scheduler focusing on high-throughput computing, has become more and more popular in high-energy physics computing. The HTCondor cluster with more than 10,000 CPU cores running at the Computing Center of the Institute of High Energy Physics in China supports several HEP experiments, such as JUNO, BES, ATLAS, CMS, etc. The work nodes owned by the experiments are managed by HTCondor. A sharing pool including work nodes contributed by all the HEP experiments has been created to meet the peak computing requirements of the different experiments during different time periods. To manage the sharing pool, a database is used to store the cluster's information, including node and group attributes. The attributes can be adjusted by the cluster manager and are published to both the scheduler servers and the work nodes via the HTTP protocol. A monitoring watchdog has been developed to monitor the health status of the work nodes and report it to the database. Both the servers and the work nodes update their own configuration based on the attributes published from the database. The overall resource utilization rate of the cluster has increased from 50% to more than 80% since the sharing pool was created.
        Speaker: Dr Jingyan Shi (INSTITUTE OF HIGH ENERGY PHYSICS, Chinese Academy of Science)
        Slides
      • 81
        PIK Computing Centre
        In the framework of the PIK nuclear reactor reconstruction project, a new PIK Computing Centre was commissioned this fall, whose main task will be the storage and processing of PIK experiment data. The Centre's capacity will also be used by other scientific groups at PNPI for solving problems in different areas of science, such as computational biology and condensed matter physics. It will also become an integral part of the computing capacities of NRC "Kurchatov Institute". The PIK Computing Centre has a heterogeneous structure and consists of several types of computing nodes suitable for a wide range of tasks and two independent data storage systems, all of which are interconnected with a fast InfiniBand network. The engineering infrastructure provides redundant main power and two independent UPS installations for the computing equipment and for the cooling system.
        Speaker: Mr Andrey Kiryanov (PNPI)
        Slides
    • Triggering, Data Acquisition, Control Systems Conference Hall

      Convener: Dr Oleg Rogachevskiy (JINR)
      • 82
        Slow Control system at BM@N experiment
        Big modern physics experiments are collaborations of workgroups and require a wide variety of electronic equipment. Besides the trigger electronics or the data acquisition system (DAQ), there is hardware that is not time-critical and can be run at a low priority. A Slow Control system is used to set up and monitor such hardware. Slow Control systems in a typical experiment are often used to set up and/or monitor components such as high-voltage modules, temperature sensors, pressure gauges, leak detectors, RF generators, PID controllers, etc., often from a large number of hardware vendors. The Slow Control system also has to archive the received data for further analysis and handling by physicists, and to warn the personnel about critical situations and contingencies.
        Speaker: Mr Dmitry Egorov (JINR)
        Slides
      • 83
        Trigger electronics for BM@N setup in 2017
        The BM@N facility is a fixed-target experiment based on heavy-ion beams of the Nuclotron-M accelerator. The aim of BM@N is to study nucleus-nucleus collisions at energies up to 4.5 GeV per nucleon. Our group is responsible for developing the trigger system for this experiment. The described trigger system has been developed at LHEP/JINR for trigger generation in the BM@N experiments. The fast signals of the MCP-PMTs and SiPMs of the trigger and start detectors are used as input signals for the trigger processing. The trigger system consists of detectors with fast front-end electronics (FEE), power supplies for the detectors and the FEE, and a level-0 trigger processor unit (Trigger L0 unit, T0U). The T0U is used to generate the BM@N zero-level trigger and a precise start for the TOF detector. The T0U generates the trigger signal based on the beam line, target area and barrel detector signals. This report presents the concept, characteristics and performance of the trigger system during the last BM@N runs.
        Speaker: Mr Victor Rogov (JINR)
        Slides
      • 84
        Online monitoring system for the BM@N experiment
        The BM@N experiment is a crucial stage in the technical development of the NICA project. In order to maintain the experiment effectively, it is extremely important to have a fast and convenient tool, uniform for all detectors, to monitor the experimental facility. The system implements decoding of the incoming raw data on the fly, preprocessing and visualization on a web page. Users can monitor any detector subsystem and select a specific detector plane/station, as well as time or strip profile histograms in 1/2/3D views. The system is developed as a part of the BmnRoot package with the use of the CERN jsROOT library. The lighttpd web server is used.
        Speaker: Mr Ilnur Gabdrakhmanov (VBLHEP)
        Slides
    • Best students reports Conference Hall

      Convener: Dr Alexei Klimentov (Brookhaven National Lab)
    • Plenary Conference Hall

      Convener: Dr Vladimir Korenkov (JINR)
      • 85
        Novel approach to the particle track reconstruction based on deep learning methods
        A fundamental problem of data processing for high-energy and nuclear physics (HENP) experiments is event reconstruction. Its main part is finding tracks among a great number of so-called hits produced on sequential coordinate planes of tracking detectors. The track recognition problem consists in joining these hits into clusters, each cluster collecting all hits that belong to the same track, one of many, while discarding noise and fake hits. Such a procedure, named tracking, is especially difficult for modern HENP experiments with heavy ions, where detectors register events with very high multiplicity. Besides, the problem is seriously aggravated by the well-known shortcoming of the quite popular multiwire, strip and GEM detectors, where fake hits appear due to extra spurious crossings of wires or strips, and the number of those fakes exceeds the number of true hits by an order of magnitude. Here we discuss a novel two-step technique based on hit preprocessing by a sophisticated directed search, followed by the application of a deep learning neural network. Preliminary results of our approach on simulated events are presented.
        Speaker: Prof. Gennady Ososkov (Joint Institute for Nuclear Research)
        Slides
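        Purely as a toy illustration of the deep-learning step, the sketch below trains a small PyTorch network to separate "true" from "fake" hit candidates described by three synthetic features; the authors' actual preprocessing and network are not reproduced here.

```python
# Toy sketch: a small neural network classifying hit candidates as
# "true track segment" vs "fake". Data are synthetic Gaussians, not detector
# hits; this is not the authors' model.
import torch
from torch import nn

torch.manual_seed(0)
n = 2000
true_hits = torch.randn(n, 3) + torch.tensor([1.0, 1.0, 1.0])
fake_hits = torch.randn(n, 3) - torch.tensor([1.0, 1.0, 1.0])
x = torch.cat([true_hits, fake_hits])
y = torch.cat([torch.ones(n), torch.zeros(n)]).unsqueeze(1)

model = nn.Sequential(nn.Linear(3, 32), nn.ReLU(),
                      nn.Linear(32, 16), nn.ReLU(),
                      nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

accuracy = ((model(x) > 0).float() == y).float().mean().item()
print(f"training accuracy: {accuracy:.3f}")
```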
      • 86
        Predictive analytics as an essential mechanism for situational awareness at the ATLAS Production System
        The workflow management process should be under the control of a service that is able to forecast the processing time dynamically, according to the status of the processing environment and of the workflow itself, and to react immediately to any abnormal behaviour of the execution process. Such a situational awareness analytic service would provide the possibility to monitor the execution process, to detect the source of any malfunction, and to optimize the management process. The stated service for the second generation of the ATLAS Production System (ProdSys2, an automated scheduling system) is based on a predictive analytics approach to estimate the duration of data processing (in terms of ProdSys2, a task or a chain of tasks), with later usage in decision-making processes. Machine learning ensemble methods are chosen to estimate the completion time (i.e., the “Time To Complete”, TTC) for every (production) task and chain of tasks, so that “abnormal” task processing times would warn about a possible failure state of the system. This is the primary phase of the service and its precision is crucial. The first implementation of this analytic service already includes the Task TTC Estimator tool and is designed to provide a comprehensive set of options to adjust the analysis process, along with the possibility to extend its functionality.
        Speaker: Mr Mikhail Titov (National Research Centre «Kurchatov Institute»)
        Slides
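        A hedged illustration of the ensemble-based TTC estimation idea: a scikit-learn gradient-boosting regressor fitted to synthetic task features. The features, their relation to the target and the data are invented; this is not the ProdSys2 model.

```python
# Illustrative "Time To Complete" estimator using an ensemble regressor
# (scikit-learn gradient boosting) on synthetic task features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_tasks = 5000
n_events = rng.integers(1_000, 1_000_000, n_tasks)
priority = rng.integers(1, 10, n_tasks)
queue_load = rng.random(n_tasks)
# Synthetic ground truth: duration grows with event count and queue load.
ttc_hours = n_events / 50_000 * (1 + queue_load) + rng.normal(0, 2, n_tasks)

X = np.column_stack([n_events, priority, queue_load])
X_train, X_test, y_train, y_test = train_test_split(X, ttc_hours, random_state=0)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)
print("R^2 on held-out tasks:", round(model.score(X_test, y_test), 3))
```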
      • 87
        IT for Applied Environmental Research in JINR
        IT in nuclear research has been focused mainly on mathematical modelling of nuclear phenomena and on big data analyses. The applied nuclear sciences used for environmental research bring in a different set of problems, where information technologies may significantly improve the research. The ICP Vegetation is an international research program investigating the impacts of air pollutants on crops and (semi-)natural vegetation. Thirty-five parties participate in the program. One of the co-leading institutions of the program is the Frank Laboratory of Nuclear Physics (FLNP) of the JINR. In cooperation with the Laboratory of Information Technologies (LIT) of the JINR, a database system for the collection and processing of terrain moss sample data was developed. The goal of the research teams from the VŠB-TU Ostrava, the FLNP and the LIT is further development of the database system by adding new functions. These new functions should standardize the analyses (statistical toolset) and the visualization (GIS toolset) of the samples provided by all research teams.
        Speaker: Mr Petr Jancik (JINR; VSB - Technical University of Ostrava)
        Slides
    • 11:00
      Coffee break Conference Hall

    • Computations with Hybrid Systems (CPU, GPU, coprocessors) Conference Hall

      Convener: Dr Dmitry Podgainy (JINR)
      • 88
        A new HPC Architecture Intel
        Speaker: Nikolai Mester (Intel)
        Slides
      • 89
        The development on HYBRILIT of the Machine-learning algorithms for identification and separation of the neutron and gamma-ray signals obtained from the DEMON detector
        We apply several machine-learning (ML) algorithms to the identification and separation of the neutron and gamma-ray signals coming from the DEMON (DEtecteur MOdulaire de Neutrons) detector. The ML predictions have been contrasted with the results obtained with a standard method based on an integral-area scheme. In the situations where the standard method fails, a properly trained ML algorithm provides more adequate predictions and, therefore, performs much better.
        Speakers: Dr Dmitry Podgainy (JINR), Dr Oksana Streltsova (JINR)
        Slides
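        To make the comparison concrete, the sketch below generates toy pulses with different slow-component fractions and contrasts a simple integral-area (tail-to-total charge) cut with an off-the-shelf scikit-learn classifier; the pulse shapes, the cut value and the integration window are invented and are not DEMON parameters.

```python
# Sketch of the two approaches on synthetic pulses: an integral-area
# (tail-to-total charge) cut as baseline, and an ML classifier trained on the
# same waveforms. All pulse shapes and cut values are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
t = np.arange(128)

def pulse(tail_fraction, n):
    """Toy scintillation pulse: fast component plus a slower tail."""
    fast = np.exp(-t / 5.0)
    slow = np.exp(-t / 40.0)
    shape = (1 - tail_fraction) * fast + tail_fraction * slow
    return shape * rng.uniform(0.5, 2.0, (n, 1)) + 0.02 * rng.standard_normal((n, 128))

gammas = pulse(0.10, 3000)    # gammas: smaller slow component
neutrons = pulse(0.25, 3000)  # neutrons: larger slow component
waveforms = np.vstack([gammas, neutrons])
labels = np.r_[np.zeros(3000), np.ones(3000)]

# Baseline: charge-comparison ratio Q_tail / Q_total with a fixed cut
# (0.34 is chosen by hand for these toy shapes).
q_total = waveforms.sum(axis=1)
q_tail = waveforms[:, 20:].sum(axis=1)
baseline_pred = (q_tail / q_total) > 0.34
print("charge-comparison accuracy:", (baseline_pred == labels).mean())

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(waveforms[::2], labels[::2])             # train on half the pulses
print("ML accuracy:", clf.score(waveforms[1::2], labels[1::2]))
```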
      • 90
        Application of NVIDIA CUDA technology to calculation of ground states of few-body nuclei
        The modern parallel computing solutions were used to speed up the calculations by Feynman’s continual integrals method. The algorithm was implemented in C++ programming language. Calculations using NVIDIA CUDA technology were performed on the NVIDIA Tesla K40 accelerator installed within the heterogeneous cluster of the Laboratory of Information Technologies, Joint Institute for Nuclear Research, Dubna. The results for energies of the ground states of several few-body nuclei demonstrate overall good agreement with experimental data. The obtained square modulus of the wave function of the ground states provided the possibility of investigating the spatial structure of the studied nuclei. The use of general-purpose computing on graphics processing units significantly (two orders of magnitude) increases the speed of calculations. This approach may be useful for investigation of any few-body system including few-quark systems and may serve as an addition to other well-known methods, e.g., Gaussian expansion method and hyperspherical-harmonics technique.
        Speaker: Prof. Viacheslav Samarin (Joint Institute for Nuclear Research, Flerov Laboratory of Nuclear Reactions)
        Slides
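        A minimal CPU-only NumPy sketch of the imaginary-time (Feynman-Kac) path-integral idea behind such calculations, applied to a 1D harmonic oscillator with hbar = m = 1; the toy potential, the Brownian-bridge sampling and all parameter values are illustrative assumptions, not the authors' CUDA implementation for few-body nuclei.

          import numpy as np

          rng = np.random.default_rng(1)

          def V(x):
              """Toy potential: 1D harmonic oscillator, exact ground-state energy E0 = 0.5."""
              return 0.5 * x**2

          def propagator_00(T, n_steps=300, n_paths=10000):
              """Monte Carlo estimate of the Euclidean propagator K(0, 0; T) via Feynman-Kac:
              K = K_free * < exp(-integral of V along a Brownian bridge pinned at 0) >."""
              dt = T / n_steps
              # Brownian paths with variance t (hbar = m = 1), pinned to 0 at both ends.
              incr = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
              w = np.cumsum(incr, axis=1)
              t = np.arange(1, n_steps + 1) * dt
              bridge = w - (t / T) * w[:, -1:]
              weights = np.exp(-dt * V(bridge).sum(axis=1))
              k_free = 1.0 / np.sqrt(2.0 * np.pi * T)
              return k_free * weights.mean()

          # E0 is approximately -d/dT ln K(0,0;T) at large T; use a finite difference.
          T1, T2 = 4.0, 6.0
          E0 = -(np.log(propagator_00(T2)) - np.log(propagator_00(T1))) / (T2 - T1)
          print("estimated E0:", E0, "(exact value 0.5)")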
      • 91
        Automated system to monitor and predict matching of vocational education programs with labour market
        Interaction of the labour market and the educational system is a complex process with many parties involved (government, universities, employers, individuals, etc.). Both horizontal and vertical mismatches between skills and qualifications on the one side and market requirements on the other are still widely observed in both developing and developed countries. To discover both qualitative and quantitative correlations between the education system and the labour market in a reasonable time, we proposed an intellectual system that monitors the demands of employers and matches them with educational standards and programs. The analysis is based on stringing together job requirements and single competencies from the educational standards, the lowest levels of the models of the labour market and the education system respectively. To automate the processing as much as possible, we used machine-learning technologies for semantic parsing. Creation of semantic models is one of the well-known key problems of natural language processing. Since the wording of both requirements and competency details usually consists of about 10 words, calculation of the semantic distance between short sentences lies at the very core of the method. For our task, the proposed approach works with vector representations of words and short sentences. Big Data approaches and technologies are used for collecting and processing the data. The system being created makes it possible to estimate the need for specific professions in the regions, to assess how well professional standards match real market jobs, and to plan the number of funded places in colleges and universities. Having historical data, it is possible not only to determine the current expectations of the labour market towards the education system, but also to make further predictions. (An illustrative sketch of the short-sentence similarity step follows this entry.)
        Speaker: Sergey Belov (Joint Institute for Nuclear Research)
        Slides
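        A minimal sketch of the short-sentence similarity step, using a tiny hand-made word-embedding table (in practice the word vectors would come from a model trained on a large corpus); averaging word vectors and taking the cosine similarity is one common, illustrative choice, not necessarily the exact method used by the authors.

          import numpy as np

          # Hypothetical toy word vectors; a real system would load pretrained embeddings.
          EMB = {
              "develop":  np.array([0.9, 0.1, 0.0]),
              "software": np.array([0.8, 0.2, 0.1]),
              "python":   np.array([0.7, 0.3, 0.2]),
              "design":   np.array([0.6, 0.4, 0.0]),
              "database": np.array([0.5, 0.5, 0.3]),
              "teach":    np.array([0.1, 0.9, 0.0]),
              "students": np.array([0.0, 0.8, 0.1]),
          }

          def sentence_vector(text: str) -> np.ndarray:
              """Average the vectors of known words (unknown words are skipped)."""
              vecs = [EMB[w] for w in text.lower().split() if w in EMB]
              return np.mean(vecs, axis=0) if vecs else np.zeros(3)

          def cosine(a: np.ndarray, b: np.ndarray) -> float:
              return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

          job_requirement = "develop python software"
          for competency in ("design software database", "teach students"):
              score = cosine(sentence_vector(job_requirement), sentence_vector(competency))
              print(f"{competency!r}: {score:.3f}")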
      • 92
        Parallel framework for partial wave analysis for the BES-III experiment
        The partial wave analysis at the BES-III experiment is done event by event using maximum likelihood estimation, with typical statistics of the order of 10 billion J/ψ events per year, resulting in huge computation times. On the other hand, the event-by-event analysis can be naturally parallelized. We developed a parallel cross-platform software architecture that can run the calculations on various high-performance computing platforms, such as multi-core CPUs, Intel Xeon Phi co-processors, and GPUs. The software supports switching between different minimization algorithms such as MINUIT or FUMILI. The wave functions are constructed using the covariant tensor formalism. Currently the analysis is developed for the J/ψ → K+K-π0 decay channel. An algorithm for caching intermediate results has been developed, minimizing the amount of calculation performed in each iteration. In addition, a number of software optimizations have been applied, including vectorization, memory access linearization, and data alignment. In the future we plan to add analyses for new reaction channels and possibly to adapt our software for use in other experiments. (An illustrative sketch of a cached, vectorized likelihood follows this entry.)
        Speaker: Ms Victoria Tokareva (JINR)
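        A minimal NumPy sketch of the caching idea behind such fits: per-event partial-wave amplitudes are computed once, so every likelihood evaluation reduces to matrix-vector products; the random toy amplitudes, the simple coupling model and the use of scipy's Nelder-Mead minimizer (as a stand-in for MINUIT/FUMILI) are illustrative assumptions, not the BES-III code.

          import numpy as np
          from scipy.optimize import minimize

          rng = np.random.default_rng(2)
          n_events, n_mc, n_waves = 5000, 20000, 3

          # Cache step: partial-wave amplitudes A_k(event) are computed once and reused in
          # every iteration (random toy numbers stand in for covariant tensor amplitudes).
          A_data = rng.normal(size=(n_events, n_waves)) + 1j * rng.normal(size=(n_events, n_waves))
          A_mc = rng.normal(size=(n_mc, n_waves)) + 1j * rng.normal(size=(n_mc, n_waves))

          def nll(params: np.ndarray) -> float:
              """NLL = -sum_i ln(I_i / norm), where I_i = |sum_k c_k A_k(event_i)|^2
              and norm is estimated with a phase-space Monte Carlo sample."""
              c = params[0::2] + 1j * params[1::2]          # complex couplings c_k
              intensity = np.abs(A_data @ c) ** 2           # vectorized over events
              norm = np.mean(np.abs(A_mc @ c) ** 2)         # MC integral of the intensity
              return -np.sum(np.log(intensity / norm + 1e-300))

          x0 = np.ones(2 * n_waves)
          result = minimize(nll, x0, method="Nelder-Mead")  # stand-in for MINUIT/FUMILI
          print("fitted couplings (re, im interleaved):", np.round(result.x, 3))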
    • Research Data Infrastructures@Computing for Large Scale Facilities Conference Hall

      • 93
        SALSA - Scalable Adaptive Large Structures Analysis
        Data environments are growing exponentially and the complexity of data analysis is becoming a critical issue. The goal of the SALSA project is to provide tools that connect humans and computers so that they can understand and learn from each other. Analysis of different parameters in an N-dimensional space should be made easy and intuitive. The task distribution system has to adapt to the environment where the analysis is done and has to provide easy access and interactivity to the user. SALSA contains a distribution network system that can be constructed at the level of clusters, nodes, processes and threads and is able to build any tree structure. The user interface is implemented as a web service that can connect to the SALSA network and distribute tasks to workers. The web application uses the latest web technologies, such as Angular and WebSockets, to provide interactivity and dynamism. The JavaScript ROOT (JSROOT) package is used as the analysis interface. EOS storage support with JSROOT is included to make it possible to browse files and view results in a web browser. Users can create, delete, start and stop tasks. The web application has several templates for different types of user tasks, which makes it possible to quickly create a new task and submit it to the SALSA network. (An illustrative sketch of process-level task distribution follows this entry.)
        Speaker: Martin Vala (JINR)
        Paper
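        A minimal sketch of distributing independent analysis tasks to worker processes with Python's standard multiprocessing module; this illustrates only the process level of such a task tree and is not the actual SALSA distribution network or its web interface.

          import multiprocessing as mp

          def analyse(task):
              """Toy analysis task: here, just sum a range of numbers for one 'partition'."""
              name, lo, hi = task
              return name, sum(range(lo, hi))

          if __name__ == "__main__":
              # One task per data partition; a real system would ship histograms, files, etc.
              tasks = [(f"part-{i}", i * 1_000_000, (i + 1) * 1_000_000) for i in range(8)]
              with mp.Pool(processes=4) as pool:
                  # Results arrive as workers finish, which keeps the master interactive.
                  for name, value in pool.imap_unordered(analyse, tasks):
                      print(name, value)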
      • 94
        Metadata curation and integration in High Energy and Nuclear Physics
        Modern High Energy and Nuclear Physics experiments generate vast volumes of scientific data and metadata describing scientific goals, data provenance, conditions of the research environment, and other experiment-specific information. The Data Knowledge Base (DKB) R&D project was started in 2016 as a joint project of the National Research Center “Kurchatov Institute” and Tomsk Polytechnic University. Later, interest from the ATLAS experiment at the LHC guided it to new areas of study. Within the project we studied the metadata sources in ATLAS. There are many sources of metadata, such as physics topic metadata, papers and conference notes, supporting documents, Twiki pages, Google documents and spreadsheets, data sample catalogs, and conditions and production analysis system databases. It has been noticed that information between sources is loosely coupled. Therefore, to obtain a holistic view of physics topics, including an integrated representation of all ATLAS documents and the corresponding data samples, scientists need to establish cross-relations among the metadata by themselves. The DKB is designed to provide metadata integration and to look for cross-references among the metadata from various data sources. For the end user, the DKB frontend will be implemented as a graphical user interface providing convenient integrated metadata representation, navigation, and efficient search, upwards to common metadata (production campaigns, projects, physics groups) and downwards to specific, fine-grained metadata objects (detector geometry version, software release, conditions tags). Currently, the data scheme of the ATLAS integrated metadata is organized as an ontological model. The backend of the DKB is the OpenLink Virtuoso RDF storage. It is populated with information from ATLAS publications, supporting documents and the underlying data samples. Metadata from unstructured texts were extracted by the PDFAnalyzer utility, developed by the research team. The integration dataflow execution is automated with Apache Kafka Streams. We observed that Twiki pages are very popular in the physics community and that they contain metadata corresponding to physics topics and production campaigns in semi-structured form. It is therefore natural to expand the DKB functionality by adding analysis of the Twiki pages; this will allow a more complete and accurate integrated data model. Implementation of the DKB is closely related to data sample curation and discovery. To choose the most suitable method providing a performant lookup of data samples by various combinations of parameters, we should evaluate different technologies, such as graph databases (Neo4j, OrientDB), ElasticSearch, Virtuoso, and Oracle JSON search. In our report we will summarize the current state of the project, the technology evaluation results and the recent prototype of the DKB architecture. (An illustrative sketch of linked metadata follows this entry.)
        Speaker: Ms Maria Grigorieva (NRC KI)
        Slides
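        A minimal sketch of representing such cross-references as an RDF graph and querying it with the rdflib Python package; the namespace, property names and the tiny example graph are hypothetical, and a production system would use a triple store such as Virtuoso rather than an in-memory graph.

          from rdflib import Graph, Literal, Namespace
          from rdflib.namespace import RDF, RDFS

          EX = Namespace("http://example.org/dkb/")   # hypothetical namespace
          g = Graph()
          g.bind("ex", EX)

          # A paper, the physics topic it belongs to, and the data sample it is based on.
          g.add((EX.paper42, RDF.type, EX.Paper))
          g.add((EX.paper42, RDFS.label, Literal("Search for X in dilepton events")))
          g.add((EX.paper42, EX.aboutTopic, EX.exotics))
          g.add((EX.paper42, EX.usesSample, EX.sample_001))
          g.add((EX.sample_001, EX.softwareRelease, Literal("21.0.77")))

          # Integrated view: from a physics topic down to the samples and their releases.
          query = """
          SELECT ?paper ?sample ?release WHERE {
              ?paper  ex:aboutTopic      ex:exotics ;
                      ex:usesSample      ?sample .
              ?sample ex:softwareRelease ?release .
          }"""
          for row in g.query(query, initNs={"ex": EX}):
              print(row.paper, row.sample, row.release)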
      • 95
        Data management in heterogeneous metadata storage and access infrastructures
        In modern times many large projects sooner or later have to face the problem of how to store, manage and access huge volumes of semi-structured and loosely connected data, namely project metadata: information required for monitoring and management of the project itself and its internal processes. The structure of the metadata evolves all the time to meet the needs of the monitoring tasks and user requirements. As the structure and volume of the metadata grow, it becomes impractical to store everything in a single central storage: with time such a storage becomes less flexible in structure, and query processing slows down. To provide structural flexibility and to keep metadata access times short enough for comfortable interaction with monitoring systems, the next step is to replace the single central storage with a number of task-specific storages: one for active metadata, another for the archive, yet another to store aggregated information (as a cache storage), etc. In a broad sense the combination of these storages can be described as a single hybrid (or heterogeneous) metadata storage and access infrastructure. The main goal of this infrastructure is to provide information about the project and its internal processes in a human-readable and searchable way. Possible components of this infrastructure include text documents, wiki pages, databases, search interfaces to storage systems, etc. To keep all these components synchronized even in the case of a software, hardware or network failure, there is a need for a supervising tool (or a set of tools) that is aware of the infrastructure and takes care of data consistency within it. The usual way is to create such a supervising tool individually for each case, meaning that each part of the infrastructure takes care of itself, synchronizing data only with its direct neighbours, namely the information sources for this part. For each case one must then solve the same issues of reliability, throughput, scalability and fault tolerance. To avoid solving the same issues individually for every new system operating with metadata, we started to design a unified way to develop and implement such a supervising tool. It would allow developers to implement only the case-specific modules in each particular case, leaving the responsibility for the common issues to common, ready-to-use tools. The first premise for this work appeared in 2014-2015, when we were working on the Metadata Hybrid Storage R&D project for PanDA, the workflow management system of the ATLAS experiment at the LHC, at NRC “Kurchatov Institute”. In this report we will explain the motivation for the problem, describe the principal architecture designed to address it and present the prototype system developed and implemented for the ATLAS Data Knowledge Base, the joint R&D project of NRC KI and Tomsk Polytechnic University started in 2016. We will also discuss our technology choice for the prototype, provide performance and scalability test results and present our plans for the future. (An illustrative sketch of such a synchronization loop follows this entry.)
        Speaker: Mrs Marina Golosova (National Research Center "Kurchatov Institute")
        Slides
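        A minimal sketch of the supervising idea under simplified assumptions: change records from the primary (active) metadata store are applied idempotently to downstream components (an archive and a search cache), with retries on failure; all names and the in-memory stand-in stores are hypothetical, not part of the described architecture.

          import time

          primary_changes = [  # change log of the active metadata store (hypothetical)
              {"id": "task-1", "state": "running"},
              {"id": "task-1", "state": "done"},
              {"id": "task-2", "state": "running"},
          ]

          archive, search_cache = {}, {}  # downstream components (stand-ins for real storages)

          def apply_to(store: dict, record: dict) -> None:
              """Idempotent upsert: replaying the same record leaves the store unchanged."""
              store[record["id"]] = record["state"]

          def supervise(changes, sinks, max_retries=3, delay=0.1):
              """Push every change record to every sink, retrying transient failures."""
              for record in changes:
                  for name, sink in sinks.items():
                      for attempt in range(max_retries):
                          try:
                              apply_to(sink, record)
                              break
                          except Exception as exc:           # e.g. a network or storage error
                              print(f"{name}: attempt {attempt + 1} failed: {exc}")
                              time.sleep(delay)
                      else:
                          print(f"{name}: giving up on {record}")  # would be queued for later

          supervise(primary_changes, {"archive": archive, "search_cache": search_cache})
          print(archive, search_cache)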
      • 96
        Current status of the geometry database for the CBM experiment
        In this paper we present the current state of development of the Geometry DB (Geometry Database) for the CBM experiment [1]. At the moment, the CBM collaboration is moving from the stage of prototype research and tests to the production of the detectors and their components. High-level control of the manufacturing process is required because of the complexity and high price of the detector components. As a result, there is a need to develop a database complex for the CBM experiment. In [2] we briefly discussed a complex of Database Management Systems (DBMS) for the CBM collaboration and described the current status of its implementation; the DBMS structure was developed on the basis of database usage at the LHC and other high energy physics experiments [2]. The Geometry DB supports the CBM geometry, which describes the CBM experimental setup at the level of detail required for the simulation of particle transport through the setup using GEANT3 [3]. On the basis of the requirements, the Geometry Database [4] has been developed within the PostgreSQL and SQLite DBMS. The main purpose of this database is to provide convenient tools for: 1) managing the geometry modules (MVD, STS, RICH, TRD, RPC, ECAL, PSD, Magnet, Beam Pipe); 2) assembling various versions of the CBM setup as a combination of geometry modules and additional files (Field, Materials); 3) providing support for various versions of the CBM setup. Both GUI (Graphical User Interface) and API (Application Programming Interface) tools are provided for CBM users of the Geometry Database. (An illustrative schema sketch follows this entry.) 1. Friman B. et al. Compressed Baryonic Matter in Laboratory Experiments // The CBM Physics Book, 2011. 2. E.P. Akishina, E.I. Alexandrov, I.N. Alexandrov, I.A. Filozova, V. Friese, V.V. Ivanov: Conceptual Consideration for CBM Databases, Communication of JINR, E10-2014-103, Dubna, 2014. 3. GEANT - Detector Description and Simulation Tool, CERN Program Library, Long Write-up, W5013 (1995). 4. User Requirements Document of the Geometry Database for the CBM experiment http://lt-jds.jinr.ru/record/69336?ln=en
        Speaker: Irina Filozova (JINR)
        Slides
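        A minimal sketch of how such a geometry catalogue could look in SQLite, using Python's sqlite3 module; the table and column names and the example tags are illustrative assumptions and do not reproduce the actual CBM Geometry DB schema.

          import sqlite3

          conn = sqlite3.connect("cbm_geometry_sketch.db")
          cur = conn.cursor()
          cur.executescript("""
          CREATE TABLE IF NOT EXISTS module (
              id       INTEGER PRIMARY KEY,
              name     TEXT NOT NULL,        -- e.g. 'STS', 'TRD', 'Magnet'
              version  TEXT NOT NULL,
              filepath TEXT NOT NULL,        -- geometry file used by the simulation
              UNIQUE (name, version)
          );
          CREATE TABLE IF NOT EXISTS setup (
              id  INTEGER PRIMARY KEY,
              tag TEXT NOT NULL UNIQUE       -- a named version of the full CBM setup
          );
          CREATE TABLE IF NOT EXISTS setup_module (
              setup_id  INTEGER NOT NULL REFERENCES setup(id),
              module_id INTEGER NOT NULL REFERENCES module(id),
              PRIMARY KEY (setup_id, module_id)
          );
          """)

          cur.execute("INSERT OR IGNORE INTO module (name, version, filepath) VALUES (?, ?, ?)",
                      ("STS", "v_example", "geometry/sts_example.geo"))
          cur.execute("INSERT OR IGNORE INTO setup (tag) VALUES (?)", ("setup_sketch",))
          cur.execute("""INSERT OR IGNORE INTO setup_module
                         SELECT s.id, m.id FROM setup s, module m
                         WHERE s.tag = ? AND m.name = ? AND m.version = ?""",
                      ("setup_sketch", "STS", "v_example"))
          conn.commit()

          # List all modules that make up a given setup version.
          for row in cur.execute("""SELECT m.name, m.version, m.filepath
                                    FROM setup s
                                    JOIN setup_module sm ON sm.setup_id = s.id
                                    JOIN module m ON m.id = sm.module_id
                                    WHERE s.tag = ?""", ("setup_sketch",)):
              print(row)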
    • Closing Conference Hall

      Convener: Dr Tadeusz Kurtyka (CERN)
      slides
    • 13:15
      LUNCH Conference Hall

    • Innovative IT Education (Round Table) Conference Hall

      Convener: Prof. Yury Panebrattsev (JINR)
      • 97
        On the Way to Open Education
        The report focuses on new trends in education. The system of open education is the path to a single world educational space, which offers unique opportunities not only for new educational initiatives on a global scale, but also for modernization of existing educational institutions.
        Speaker: Mrs Oksana Kreider (Крейдер Оксана)
        Slides
      • 98
        Adaptive strategy for training skilled personnel for the goals of the digital economy at Dubna State University
        The report focuses on new trends in education under the transition to the digital economy. The program for the development of the digital economy in Russia requires new approaches to training and the use of modern digital technologies. The training strategy in modern conditions will be presented using the example of Dubna State University.
        Speaker: Mrs Evgenia Cheremisina (Черемисина Евгения)
        Slides
      • 99
        Embedding of containerization technology in the core of the virtual computer lab
        When training highly skilled IT professionals, an important challenge for the university is to teach graduates professional competencies that they will be able to use to successfully solve a broad range of substantive problems arising at all stages of the lifecycle of corporate information systems. In practice, such information systems are, as a rule, used for enterprise management, workflow management in technological processes, IT infrastructure management, creating high-availability web solutions, data collection, and data analysis and storage. It is obvious that for students to learn these professional competencies, they need to master a large amount of theoretical material and to carry out practical exercises and research on the development of modern information systems, their deployment and support, the effective implementation of solutions for problem-oriented tasks, etc. The virtual computer lab provides a set of software and hardware-based virtualization and containerization tools that enable the flexible, on-demand provision and use of computing resources in the form of "cloud" Internet services for carrying out research projects, resource-intensive computations and tasks related to the development of complex corporate and other distributed information systems. The service also provides dedicated virtual servers for innovative projects carried out by students and staff of the Institute of System Analysis and Control. The introduction of containerization technology serves to improve the deployment and use of corporate information systems in the training of IT professionals. Compared to classical virtualization, the kernel of the underlying operating system is shared by all containers. On the one hand, this restricts the use of other operating systems; on the other hand, it increases the payload that a server of a similar configuration can carry. This is achieved thanks to the specifics of the containerization architecture, which we examine using the example of Docker. Docker uses a client-server architecture in which the Docker client interacts with the Docker daemon, which creates and launches containers on the server and provides them to students. In general terms, a containerization system can be represented by three key components: images, registries, and containers. Images are read-only templates that contain an operating system based on the same kernel version as the host system, with the necessary pre-configured and adapted software. These images are created, modified if necessary, and then used for generating individual, isolated containers. The images are stored in the registry (the component responsible for storing and distributing images) and are formed on the basis of the course curricula and laboratory work plans prepared by the teaching staff. Public hubs containing a large collection of images created by independent enthusiasts can also be used to download the required images. The containers themselves are, in fact, similar to directories of an operating system, where all the changes made by the user and the system software in the course of work are stored.
Each container is created from an image; it can be quickly created, started, stopped, moved, and deleted, and it is a safe sandbox for running applications, allowing the student to carry out any experiments without compromising the base operating system while maintaining a high level of performance. The virtual computer lab has helped us provide an optimal and sustainable technological, educational-organizational, scientific-methodological, and regulatory-administrative environment for supporting innovative approaches to computer education. It promotes the integration of the scientific and educational potential of Dubna State University and the formation of industry and academic research partnerships with leading international companies that are potential employers of graduates of the Institute of System Analysis and Control. (An illustrative container-lifecycle sketch follows this entry.)
        Speaker: Mrs Nadezhda Tokareva (Dubna State University)
        Slides
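        A minimal sketch of the container lifecycle described above, driven from Python by calling the standard Docker CLI through subprocess; it assumes Docker is installed and that the example public image may be pulled, and the image and commands are illustrative only.

          import subprocess

          IMAGE = "python:3.12-slim"  # example public image from a registry (Docker Hub)

          def docker(*args: str) -> str:
              """Run a docker CLI command and return its output."""
              result = subprocess.run(["docker", *args], check=True,
                                      capture_output=True, text=True)
              return result.stdout.strip()

          # Pull the read-only image from the registry (done once, then cached locally).
          docker("pull", IMAGE)

          # Create and run a disposable container from the image; --rm deletes it afterwards,
          # giving each student a clean, isolated sandbox for every run.
          output = docker("run", "--rm", IMAGE, "python", "-c", "print('hello from a container')")
          print(output)

          # List the images available locally.
          print(docker("images", "--format", "{{.Repository}}:{{.Tag}}"))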
    • 16:00
      Coffee break Conference Hall

    • Innovative IT Education (Round Table) Conference Hall

      Convener: Mrs Evgenia Cheremisina (Dubna International University of Nature, Society and Man. Institute of system analysis and management)
      • 101
        "Interactive Platform of Nuclear Experiment Modeling” as a multidisciplinary tool in the training of specialists in the fields of ICT and experimental nuclear physics.
        In this report, we would like to present a software and hardware complex used for training university students for their further work in real physics experiments. Our educational tool “Virtual Laboratory of Nuclear Fission” consists of several complementary components. A) General view: key ideas in nuclear physics and nuclear structure; basic theoretical models of nuclei; introduction to instruments and methods for the study of radioactive decays; virtual practicum and real measurements. B) Specific tasks: physics of spontaneous fission; experimental studies of spontaneous fission; the Light Ion Spectrometer (LIS); measurements; data analysis. Use of this tool during the summer student practice will enable future specialists to prepare experiments in a relatively short time and to perform measurements simultaneously with data analysis. We plan to integrate this educational tool into the traditional educational process using the blended learning model. Virtual labs based on real experimental data serve to develop skills and competences in nuclear physics experimental techniques. One of the main trends in modern university education is the inclusion of experimental data and research methods in the educational process. It is crucial to ensure that university graduates are able to engage in research in modern scientific laboratories with relative ease. This project proposes to develop an educational model on the basis of a modern physical setup, the Light Ion Spectrometer (LIS). This model is being developed in collaboration with the Flerov Laboratory of Nuclear Reactions. With this model, students will study nuclear physics phenomena such as spontaneous fission, which forms the basis for studies of multi-body decay modes. A distinctive feature of this model is its relative “simplicity”, while it uses the most advanced radiation detectors, nuclear electronics and other equipment to make precise measurements. This allows the students, in a relatively short training period, to go through all the stages of preparation of the experimental setup, perform the experiment and obtain physical results. Students will acquire the following skills: spectrometry of alpha particles and heavy charged fragments with modern semiconductor detectors (pin diodes); time-of-flight measurement techniques using timing detectors based on microchannel plates; analysis of data from modern digitizers, including high-precision measurement of time-of-flight spectra and the study of plasma delay effects in the registration of highly charged fragments in the semiconductor detectors; and processing of the experimental data to obtain the mass spectra of the fission fragments. (An illustrative calculation of this kind follows this entry.) Together, these virtual labs form the competencies necessary for students’ work in a modern nuclear physics experiment. Within the “Interactive Platform of Nuclear Experiment Modelling”, a multidisciplinary tool for training specialists in ICT and experimental nuclear physics, a new approach to conceptualization and skills development in scientific and engineering project work is proposed. In this computer-based approach, libraries of various components of a nuclear physics experiment (radioactive sources, various types of detectors, instruments and components of nuclear electronics) are used.
This differs from traditional labs, where the equipment and measurement methods are defined at the beginning of the work. One of the advantages of this computer-based approach is that students specializing in experimental nuclear physics are able to assemble their preferred virtual experimental setup using existing components of the libraries. Using high-level programming languages (C++, C#, etc.) with the set of libraries, students can develop new components of virtual experimental setups. This multidisciplinary tool can be used by students from a range of scientific and engineering disciplines, e.g. ICT specialists and engineers.
        Speakers: Ms Victoria Belaga (JINR), Prof. Yury Panebrattsev (JINR)
        Slides
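        A minimal sketch of the kind of data-processing step mentioned above: reconstructing a fragment mass from a measured kinetic energy and time of flight via the non-relativistic relation m = 2E(t/L)^2; the flight path, energy and time values are invented for illustration and do not come from the LIS setup.

          # Non-relativistic mass reconstruction from energy and time of flight.
          MEV_TO_J = 1.602176634e-13     # joules per MeV
          AMU_TO_KG = 1.66053906660e-27  # kilograms per atomic mass unit

          def fragment_mass_amu(energy_mev: float, tof_ns: float, path_m: float) -> float:
              """m = 2 E / v^2 with v = L / t, converted to atomic mass units."""
              v = path_m / (tof_ns * 1e-9)               # velocity in m/s
              m_kg = 2.0 * energy_mev * MEV_TO_J / v**2  # mass in kg
              return m_kg / AMU_TO_KG

          # Invented example: a 100 MeV fragment over a 0.5 m flight path in 35 ns.
          print(round(fragment_mass_amu(100.0, 35.0, 0.5), 1), "amu")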
      • 102
        Online courses and new educational programs to support research priorities within the subject-matter of JINR projects on the basis of modern educational platforms
        Rapid development of information and communication technologies and the widespread use of the Internet have led to a qualitative change in the educational technologies used around the world. The most popular form of training nowadays is blended learning, in which a full-time educational process is complemented with computer-learning tools: online courses, interactive practicums and laboratory work, computer modelling tools and simulators. Solving the problem of integrating education and science presupposes establishing an efficient and sustainable interaction of universities with research centers and institutes. The main mission of the JINR laboratories is generating new knowledge. To develop successfully, the laboratories need to attract talented young people and highly qualified professionals to work at JINR. Moreover, there are highly skilled specialists currently working at JINR who could share their knowledge through online courses intended for students from various universities of Russia and the Member States. On the other hand, the universities are interested in training highly qualified specialists to work on scientific projects. For this purpose, it is necessary to organize cooperation between universities and research centers aimed at coordinating the relevant educational programs, as well as to integrate the results of modern experiments into the educational process in the form of special courses and electives, or as independent course units of the basic disciplines. Over the past four years, massive open online courses (MOOCs) have become the most popular new educational technology. The world’s leading American and European universities – Harvard University, Massachusetts Institute of Technology, Stanford University, etc. – have adopted these technologies. Today, the most extensive MOOC platforms are being developed by the US universities: more than 10 million users in edX, more than 4 million in Udacity, and more than 23 million in Coursera. In Russia there are also several platforms offering open educational resources: "Open Education", "Universarium", "Lectorium". However, when it comes to training specialists to work in projects in top-priority research fields and MEGASCIENCE projects, it is worth listing a number of problems that cannot be solved by the existing MOOC model: • Surveying employers’ (research centers’) opinion in order to generate a list of positions for which young specialists are to be engaged and an appropriate set of skills that the applicants should master • Generating a list of courses to be studied and educational content, taking into account requirements agreed upon by both teachers and employers – scientists and engineers • Development of educational materials based on the knowledge of experts working directly in the field of interest and not teaching on a regular basis • Development of training courses based on the results of individual research groups and experiments • Prompt adaptation of teaching materials and practical tasks in response to rapidly changing technologies • Search for subjects for scientific and engineering study and potential scientific supervisors as early as the stage of taking the relevant online courses • Formation of the employer's opinion, based on the results of the student's online learning, about the continuation of their career in the research center (in the experiment).
To solve the above-mentioned problems, it is proposed to develop an open educational environment to support research priorities within the subject-matter of JINR projects in cooperation with NRNU MEPhI, Dubna University, Kazan Federal University, St. Petersburg University, the universities of the Member States and associated members, and others. The JINR University Center (UC) can act as an integrator of such cooperation, as the UC includes JINR-based departments of the leading Russian universities. On the basis of the UC, together with JINR scientists and engineers, as well as specialists from the universities participating in the project, the open educational environment can be created. The use of blended learning, in which a full-time educational process is complemented with e-learning tools, solves several important problems: • the lack of teaching specialists at universities; • the need to allocate greater financial resources to provide transportation and social infrastructure for many students from the Member States, which is required for their long-term training at the JINR basic facilities. • The developed online courses will make it possible to form a network of educational programs for the joint training of master's students with the participation of universities of the JINR Member States. The courses will be developed in the MOOC (Massive Open Online Courses) format and made available on the corresponding open-source platforms. Specialists from JINR, together with their collaborators, have experience in creating online courses for Coursera and edX. Students are currently enrolled in courses such as “Elements of nuclear and atomic physics” and “Heavy ions physics”. The entire course development process has been mastered, including pedagogical design, preparation of educational content, placement of the course on the educational platform, and course support. It is suggested to start the development with a training course for first-year master's students. The testing of the system can be carried out, for example, on the basis of the master's programs of ISAM: 27.04.03-2 "System analysis of design and technological solutions". Currently, work on the following online courses is in progress: • "Modern problems in system analysis and management" (Prof. E.N. Cheremisina) • "Distributed and cloud computing" (Prof. V.V. Korenkov) • "Big data analytics" (Prof. P.V. Zrelov). In the development of the courses, modern technologies of dynamic interactive 2D and 3D web graphics are used. Compatibility of the individual components of the open educational environment and the possibility of their reuse will be ensured by conforming to the international standards that define requirements for educational content. In developing assessment and reference materials, the LTI (Learning Tools Interoperability) specification will be used; it contains recommendations for the structure and rules of development of external educational applications for their further integration with a variety of learning management systems (LMS).
        Speaker: Ms Victoria Belaga (JINR)
        Slides
      • 103
        Educational Support of NICA Project
        Educational support of the NICA megaproject is aimed at attracting public attention (school and university students and the generally interested audience) to the scientific achievements of JINR and also at training specialists to work at the NICA accelerator complex in the mid-term and long-term perspective. It is also necessary to include scientific and applied results obtained at NICA in the educational programs of undergraduate and postgraduate education. The scientific results expected from the NICA collider will undoubtedly broaden the world's knowledge about the structure and evolution of matter at the early stage of the Universe and, in the light of experimental data, will allow one to answer topical questions of modern science, for example about the nature of nucleon spin and the spin structure of the lightest nucleus, deuterium, at small distances. Such scientific findings and technological solutions should be accompanied by educational, popular-science and outreach projects intended for a wider audience, including school students. In the future, this will allow us to overcome a serious social problem: the decline in young people's interest in scientific research and engineering professions. As a first step, an open lesson for school students has been created on the theme “NICA. Universe in the Lab”. In this video, Academician Grigory Vladimirovich Trubnikov speaks about the research that scientists from different countries will carry out at the NICA accelerator complex. Creation of a modern educational environment for continuous learning and the training of highly qualified personnel in the framework of the mega-project “NICA complex” requires the development of online courses within the NICA project subject-matter, for example on the basics of accelerator equipment, experimental methods of nuclear physics, introduction to the physics of relativistic nuclear collisions, electronics for physics experiments, etc. Special attention will be paid to the development and promotion of the dedicated website of the NICA project, which will include both up-to-date project information and educational materials within the subject-matter of the NICA project for students and young scientists. Using modern technologies of 3D modelling and scientific data visualization will enable the development of NICA educational resources at the level of the world's leading research centers. An interactive map of the NICA complex has been created and will be updated as the complex develops; it allows users to explore the setups of the collider. Completed modules of the complex have been shot on video in order to demonstrate the current construction process, while the modules that are still under development are shown as 3D graphics that reveal the device itself and explain its working principle. For each node of the complex we expect to produce both video and graphic materials. Building brand awareness of JINR and NICA among a wider audience is one of the most important tasks. A good solution to this problem is the creation of multimedia exhibits associated with the JINR research topics and participation in a variety of Russian and international exhibitions, science days, and museum exhibitions.
        Speaker: Prof. Yury Panebrattsev (JINR)
        Slides
      • 104
        Electronic Training of Specialists for the NICA Program
        The report investigates the influence of the Bologna Process on the Russian system of higher professional education. Changes in both the format and content of higher education, made to conform to the European educational format, are presented. An analysis of the training of bachelor's and master's students for the NICA program at Dubna State University is given. The key point of the training program is an individualized teaching approach, based at the base department at the Laboratory of High Energy Physics (LPHE). The prospects for electronic training along individualized educational routes, combined with sound professional training, are outlined.
        Speaker: Dr Iurii Sakharov (Dubna International University of Nature, Society and Man)
        Slides
      • 105
        Cybersecurity of Internet of Things - Risks and Opportunities
        The Internet of Things (IoT) is developing at a tremendous rate. It is a combination of devices connected via the Internet and other networks that are capable of receiving information from the outside world, analyzing it and, if necessary, managing external devices, as well as providing information for decision-making. The goal is to create a more comfortable, safer and more efficient environment for both personal and public life. But as with any rapidly evolving Internet technology, there are growing risks from the point of view of cybersecurity. The most significant cyber-incidents in the world of IoT, the reasons for the occurrence of such cases, and possible ways to improve the state of IoT cybersecurity are considered.
        Speaker: Dr Alexandre Karlov (JINR)
        Slides
      • 106
        Information Disclosure in Software Intensive Systems
        Research and investigations of computer security problems show that the most harmful problem is information disclosure. Today this problem is enormous in the context of the new cloud services. The paper gives an overview of the main computer security components: attacks, vulnerabilities and weaknesses, with a focus on the latter. An approach to the formalization of information disclosure weaknesses and its use for automated weakness discovery are discussed.
        Speaker: Prof. Vladimir Dimitrov (University of Sofia)
        Slides