9th International Conference "Distributed Computing and Grid Technologies in Science and Education" (GRID'2021)

Timezone: Europe/Moscow
Description

GRID'2021 will take place on 5-9 July 2021

The International Conference "Distributed Computing and Grid Technologies in Science and Education" will be held in a mixed format (on-site and online) at the Meshcheryakov Laboratory of Information Technologies (MLIT) of the Joint Institute for Nuclear Research (JINR).

The conference will include a round table, "Modern IT technologies and education".


Tuesday, 6 July 2021

The group photo will be taken at 10:10 near the Conference Hall


Wednesday 7 July

Boat trip

Boarding starts at 15:30
Departure at 16:00
From the pier on the Komsomolskaya embankment


!!! To participate in the conference in person, you must carry a document confirming one of the following: vaccination against COVID-19 with the first component or with a single-component vaccine (vaccination certificate); a negative PCR test result obtained no more than three calendar days ago; or an immunoglobulin G level (IgG ≥ 20) recorded no more than three calendar months ago.


All events within the conference will be held in compliance with the sanitary and epidemiological requirements to prevent the spread of the coronavirus infection COVID-19.

Conference Topics:

  1. Distributed computing systems – technologies; architectures; models; operation and optimization; middleware and services.
  2. Research infrastructure – networking; computing centre infrastructure; facility integration of heterogeneous resources; monitoring; decision support and management tools.
  3. Computing for MegaScience Projects (LHC, NICA, FAIR, SKA, PIC, XFEL, ELI, etc.).
  4. Distributed computing applications – in science; in education; in industry and business.
  5. HPC – supercomputers; CPU architectures; GPU; FPGA; HPC applications.
  6. Data Management, Organization and Access – databases; distributed storage systems; Datalakes.
  7. Virtualization – cloud computing; virtual machines; container technologies.
  8. Quantum information processing – quantum machine learning; quantum computing for HEP; quantum communication; quantum internet; simulation of quantum information processing.
  9. Big data Analytics and Machine learning.
  10. Distributed computing, HPC and ML for solving applied tasks.

Conference languages: Russian and English.


Contacts:
Address:   141980, Russia, Moscow region, Dubna, Joliot Curie Street, 6
Phone:      (7 496 21) 64019, 63012

E-mail:      grid2021@jinr.ru
URL:         http://grid2021.jinr.ru/
 

Sponsors: IBS Platformix, Softline, ITCost, Intel, NIAGARA, Supermicro, Dell, RSC Group

 

Participants
  • Adam Kisiel
  • Ahmed Elaraby
  • Aleksander Kokorev
  • Aleksander Makhalkin
  • Aleksandr Alekseev
  • Aleksandr Baranov
  • Aleksandr Dik
  • Aleksandr Klochkov
  • Aleksandr Malko
  • Aleksei Golunov
  • Aleksey Bondyakov
  • Aleksey Fedorov
  • Alexander Bogdanov
  • Alexander Degtyarev
  • Alexander Krylov
  • Alexander Kryukov
  • Alexander Kurepin
  • Alexander Moskovsky
  • Alexander Uzhinskiy
  • Alexei Uteshev
  • Alexei Zverev
  • Alexey Abramov
  • Alexey Artamonov
  • Alexey Stadnik
  • Alexey Stankus
  • Alexey Zhemchugov
  • Alikhan Urumov
  • Alla Shevchenko
  • Anar Faradzhov
  • Anastasiia Nikolskaia
  • Andrei Tsaregorodtsev
  • Andrey Baginyan
  • Andrey Chepurnov
  • Andrey Demichev
  • Andrey Iachmenev
  • Andrey Kiryanov
  • Andrey Kondratyev
  • Andrey Nechaevskiy
  • Andrey Shevchenko
  • Andrey Ulyanov
  • Andrey Zarochentsev
  • Anna Shaleva
  • Anton Brekhov
  • Anton Yudin
  • Artashes Mirzoyan
  • Artem Petrosyan
  • Arutyun Avetisyan
  • Astghik Torosyan
  • Cristian Calude
  • Danila Oleynik
  • Daria Priakhina
  • Dastan Ibadullayev
  • David Minasyan
  • David Satseradze
  • Daviti Goderidze
  • Denis Egorov
  • Dmitri Portnov
  • Dmitrii Marov
  • Dmitrii Tereshchenko
  • Dmitriy Garanov
  • Dmitriy Gavrilov
  • Dmitriy Maximov
  • Dmitriy Scherbakov
  • Dmitry Grin
  • Dmitry Kulyabov
  • Dmitry Podgainy
  • Dmitry Wiens
  • Dmitry Yermak
  • Dominik Matis
  • E.Yu. Shchetinin
  • Eduard Nikonov
  • Egor Budlov
  • Egor Shchavelev
  • Ekaterina Kotkova
  • Ekaterina Krivchun
  • Ekaterina Pavlova
  • Ekaterina Polegaeva
  • Ekaterina Rezvaya
  • Ekaterina Voytishina
  • Elena Kirpicheva
  • Elena Nurmatova
  • Elizaveta Cherepanova
  • Evgenia Cheremisina
  • Evgeniy Kuzin
  • Evgeny Alexandrov
  • Evgeny Perepelkin
  • Fedor Bukreev
  • Fedor Prokoshin
  • Florian Rehm
  • Gennady Ososkov
  • Gheorghe Adam
  • Giorgia Miniello
  • Gleb Mozhaiskii
  • Grigore Secrieru
  • Grigoriy Krasov
  • Igor Alexandrov
  • Igor Chernykh
  • Igor Pelevanyuk
  • Igor Semenov
  • Igor Sokolov
  • Ilya Gorbunov
  • Ilya Kalagin
  • Ilya Kurochkin
  • Ilya Pavlov
  • Ilya Trifalenkov
  • Ilya Tsvetkov
  • Irina Enyagina
  • Irina Filozova
  • Irina Nikolaeva
  • Iuliia Gavrilenko
  • Ivan Gankevich
  • Ivan Hristov
  • Ivan Kadochnikov
  • Ivan Kashunin
  • Ivan Matveev
  • Ivan Petriakov
  • Ivan Slepov
  • Ivan Sokolov
  • Jan Bitta
  • Jasur Kiyamov
  • Joanna Waczyńska
  • Kamil Bilyatdinov
  • Kerstin Borras
  • Konstantin Gertsenberger
  • Leonid Sevastianov
  • Lev Shchur
  • Liliia Ziganurova
  • Mahdi Rezaei
  • Margarit Kirakosyan
  • Margarita Stepanova
  • Maria Dima
  • Maria Grigorieva
  • Maria Mingazova
  • Marina Cherkasskaya
  • Martin Bures
  • Martin Fekete
  • Martin Vala
  • Maxim Zuev
  • Mihai Dima
  • Mikhail Belov
  • Mikhail Bich
  • Mikhail Matveyev
  • Mikhail Mineev
  • Nadezhda Shchegoleva
  • Nadezhda Tokareva
  • Natalia Gromova
  • Natalia Nikitina
  • Nataliia Kulabukhova
  • Nelli Pukhaeva
  • Nicolay Luchinin
  • Nikita Balashov
  • Nikita Stepanov
  • Nikita Tsegelnik
  • Nikolay Khrapov
  • Nikolay Kutovskiy
  • Nikolay Mester
  • Nikolay Voytishin
  • Nugzar Makhaldiani
  • Nurzada Saktaganov
  • Oksana Kreider
  • Oksana Streltsova
  • Oleg Iakushkin
  • Oleg Rogachevskiy
  • Oleg Semenov
  • Oleg Shavykin
  • Oleg Sukhoroslov
  • Olga Derenovskaya
  • Olga Ivancova
  • Oxana Smirnova
  • Pavel Kisel
  • Pawel Lula
  • Peter Klimai
  • Petr Jancik
  • Qiulan Huang
  • Rimma Polyakova
  • Roman Rodamenko
  • Ruslan Kuchumov
  • Sanda Adam
  • Sergei Shmatov
  • Sergey Belov
  • Sergey Mikheev
  • Sergey Shorokhov
  • Sergey Smirnov
  • Sergey Ulyanov
  • Sergey Valentey
  • Sergey Volkov
  • Sergey Vostokin
  • Simone Campana
  • Slavomir Hnatic
  • Snezhana Potemkina
  • Sofia Vallecorsa
  • Stanislav Grishko
  • Stanislav Polyakov
  • Svetlana Pitina
  • Tao Lin
  • Tatiana Sapozhnikova
  • Tatiana Strizh
  • Tatiana Zaikina
  • Tatyana Solovieva
  • Tigran Mkrtchyan
  • Timofey Koptyaev
  • Vahag Bejanyan
  • Vahagn Abgaryan
  • Valery Egorshev
  • Valery Grishkin
  • Vasiliy Velikhov
  • Vasily Golubev
  • Viacheslav Iliin
  • Victor Lakhno
  • Victor Tsvetkov
  • Victoria Ezhova
  • Viktor Kotliar
  • Vitaliy Tarabrin
  • Vitaly Yermolchyk
  • Vladimir Korenkov
  • Vladimir Korkhov
  • Vladimir Mossolov
  • Vladimir Stegailov
  • Vladimir Sudakov
  • Vladimir Trofimov
  • Vladimir Voevodin
  • Vladislav Furgailo
  • Vladislav Kashansky
  • Vladislav Svozilík
  • Vsevolod Nikolskiy
  • Vsevolod Trifalenkov
  • Weidong Li
  • Xingtao Huang
  • Yea Rem Choi
  • Yelena Mazhitova
  • Yulia Dubenskaya
  • Yulia Lavdina
  • Yuri Butenko
  • Yuriy Matyushin
  • Zhanara Unbaeva
  • Zurab Modebadze
  • Štefan Korečko
  • Victor Gergel
  • Denis Zubov
  • Elena Yasinovskaya
  • Pavel Glukhovtsev
  • Sergey Monin
    • 09:00 10:00
      Registration 1h (Conference Hall / Organizing Committee Room)


      https://jinr.webex.com/jinr/j.php?MTID=m6e39cc13215939bea83661c4ae21c095
    • 10:00 10:30
      Opening session (Conference Hall, 5th floor)
    • 10:30 11:10
      Plenary reports (Conference Hall)

      • 10:30
        JINR's strategic plan for long-term development 40m


        Speaker: Dr Grigory Trubnikov (JINR)
    • 11:10 11:30
      Coffee 20m
    • 11:30 13:00
      Plenary reports (Conference Hall)

      • 11:30
        Information Technologies @ JINR development strategy 30m (Conference Hall)

        Speaker: Vladimir Korenkov (JINR)
      • 12:00
        System programming and cybersecurity 1h (Conference Hall)

        Speaker: Arutyun Avetisyan (ISP RAS)
    • 13:00 14:00
      Lunch 1h
    • 14:00 15:00
      Plenary reports (Conference Hall, 5th floor)
      • 14:00
        Distributed scientific computing challenges and outlook 1h


        Speaker: Oxana Smirnova (Lund University)
    • 15:00 15:30
      Coffee 30m
    • 15:30 17:00
      Big data Analytics and Machine learning. 407 or Online - https://jinr.webex.com/jinr/j.php?MTID=m573f9b30a298aa1fc397fb1a64a0fb4b


      • 15:30
        Can I protect my face image from recognition? 15m

        The "Fawkes" procedure is discussed as a method of protecting facial images from social networks against unauthorized use and recognition. As an example, experimental results are given confirming that CNN-based face recognition performs poorly when the "Fawkes" procedure is applied with the parameter mode = "high". A comparative analysis with the original face images shows the textural changes and the graphical features of the structural destruction of images processed by the Fawkes procedure. In addition, multilevel parametric estimates of this destruction are given and used to explain why faces processed by the Fawkes procedure can be neither recognized nor used in deep learning problems. The structural similarity index (SSIM) and the phase correlation of images are used as quantitative assessment tools.

        Speaker: Nadezhda Shchegoleva (Saint Petersburg Electrotechnical University "LETI")
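The two quantitative tools named in the abstract, the structural similarity index and phase correlation, can be sketched in a few lines of numpy. This is only an illustration: the SSIM here is a simplified single-window variant (the standard metric averages over sliding windows), and the function names are ours, not the authors'.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM computed over the whole image (single window)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

def phase_correlation_shift(x, y):
    """(row, col) circular shift mapping x onto y, via phase correlation."""
    cross = np.conj(np.fft.fft2(x)) * np.fft.fft2(y)
    r = cross / np.maximum(np.abs(cross), 1e-12)  # keep phase only
    corr = np.fft.ifft2(r).real
    return np.unravel_index(np.argmax(corr), corr.shape)
```

Identical images give SSIM of 1 and a zero shift; a cloaked or distorted image drops the SSIM and blurs the phase-correlation peak.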
      • 15:45
        On methods of the transfer learning in the classification of the biomedical images 15m

        In this paper, computational studies of the effectiveness of transfer learning methods are carried out for the problem of recognizing human brain tumors from MRI images. The deep convolutional networks VGG-16, ResNet-50, Inception_v3 and MobileNet_v2 were used as the base models. Based on them, various strategies for training and fine-tuning models for recognizing brain tumors on a data set are implemented... Analysis of their performance indicators showed that fine-tuning the ResNet-50 model on an extended data set yielded higher accuracy and F1-metric values than the other base models. The best classification quality is achieved with transfer learning on the VGG-16 model, with an accuracy of 95%.

        Speaker: Prof. Eugene Shchetinin (Financial University under the Government of the Russian Federation)
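The accuracy and F1 metrics used above to compare the models have simple definitions; a minimal numpy sketch of the metrics themselves (not of the networks), with function names of our own choosing:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

def f1_binary(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return float(2 * precision * recall / (precision + recall))
```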
      • 16:00
        Deep Learning Application for Image Enhancement 15m

        Recently, deep learning has taken a central position in the automation of our daily life and has delivered considerable improvements over traditional machine learning algorithms. Enhancing image quality is a fundamental image processing task: a high-quality image is expected in many vision tasks, and degradations such as noise, blur and low resolution need to be removed. Deep learning approaches can substantially boost performance compared with classical ones, and imaging is one of the main research areas where deep learning can make a major impact. This work presents a survey of deep learning for image enhancement and describes its potential for future research.

        Speaker: Ahmed Elaraby (South Valley University, Egypt)
      • 16:15
        Architecture of a generative adversarial network and preparation of input data for modeling gamma event images for the TAIGA-IACT experiment 15m

        Very-high-energy gamma-ray photons interact with the atmosphere to give rise to cascades of secondary particles, Extensive Air Showers (EASs), which in turn generate very short flashes of Cherenkov radiation. These flashes are detected on the ground with Imaging Air Cherenkov Telescopes (IACTs). In the TAIGA project, in addition to images directly detected and recorded by the experimental facilities, images obtained as a result of simulation are used extensively. The problem is that the computational models of the underlying physical processes (such as interactions and decays of a cascade of charged particles in the atmosphere) are very resource-intensive, since they track the type, energy, position, direction and arrival time of all secondary particles born in the EAS. On average, such computational methods yield only about 1000 images per hour, which can result in a computational bottleneck for the experiment due to the lack of model data. To address this challenge, we applied a machine learning technique called Generative Adversarial Networks (GANs) to quickly generate images of gamma events for the TAIGA project. The initial analysis of the generated images showed the applicability of the method, but revealed some features that require special preparation of the input data. In particular, it was important to teach the network that in our case gamma images are elliptical, and that the angle between the image axis and the direction to the gamma-ray source is close to zero. In this article we provide an example of a GAN architecture suitable for generating images of gamma events similar to those obtained from the IACTs of the TAIGA project. Testing the results using third-party software showed that more than 95% of the generated images were correct. At the same time, the generation is quite fast: after training, generating 4000 events takes about 10 seconds. In the article, we also discuss the possibility of improving the generated images by preprocessing the input data.

        Speaker: Yulia Dubenskaya (SINP MSU)
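The elliptical shape and axis angle that the network must learn can be checked on any (real or generated) image from its second central moments; a small numpy sketch of such a check, our own illustration rather than the project's code:

```python
import numpy as np

def image_orientation(img):
    """Major-axis angle (radians, image coordinates) of an intensity image,
    estimated from second central moments; angle 0 = horizontal axis."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    total = img.sum()
    cx, cy = (xs * img).sum() / total, (ys * img).sum() / total
    mxx = ((xs - cx) ** 2 * img).sum() / total
    myy = ((ys - cy) ** 2 * img).sum() / total
    mxy = ((xs - cx) * (ys - cy) * img).sum() / total
    return 0.5 * np.arctan2(2 * mxy, mxx - myy)
```

For a generated gamma image, one would compare this angle with the direction to the source to verify that the network reproduced the expected near-zero offset.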
      • 16:30
        The use of convolutional neural networks for processing stereoscopic IACT images in the TAIGA experiment 15m

        Machine learning methods, including convolutional neural networks (CNNs), have been successfully applied to the analysis of extensive air shower images from imaging atmospheric Cherenkov telescopes (IACTs). In the case of the TAIGA experiment, we previously demonstrated that both the quality of selection of gamma-ray events and the accuracy of the gamma-ray energy estimates by CNNs compare well with the conventional Hillas approach. Those CNNs used images from a single telescope as input. In the present work we demonstrate that adding data from another telescope results in higher accuracy of the energy estimates and better quality of selection. The same approach can be used for an arbitrary number of IACTs. All the results have been obtained with simulated images generated by the TAIGA Monte Carlo software.

        Keywords
        deep learning; convolutional neural networks; gamma astronomy;
        extensive air shower; TAIGA; stereoscopic mode

        Speaker: Stanislav Polyakov (SINP MSU)
      • 16:45
        Machine Learning for Data Quality Monitoring at CMS Experiment 15m

        We give an overview of the CMS experiment activities to apply Machine Learning (ML) techniques to Data Quality Monitoring (DQM).
        In the talk, special attention will be paid to ML for the Muon System and muon physics object DQM. ML applications for data certification (anomaly detection) and release validation will be discussed.

        Speaker: Ilya Gorbunov (JINR)
    • 15:30 17:00
      Computing for MegaScience Projects 310 or Online - https://jinr.webex.com/jinr/j.php?MTID=m326d389213a5963a1114b8cbf9613612


      • 15:30
        The ATLAS EventIndex using the HBase/Phoenix storage solution 15m

        The ATLAS EventIndex provides a global event catalogue and event-level metadata for ATLAS analysis groups and users. The LHC Run 3, starting in 2022, will see increased data-taking and simulation production rates, with which the current infrastructure would still cope but may be stretched to its limits by the end of Run 3. This talk describes the implementation of a new core storage service that will provide at least the same functionality as the current one for increased data ingestion and search rates, and with increasing volumes of stored data. It is based on a set of HBase tables, coupled to Apache Phoenix for data access; in this way we will add to the advantages of a BigData based storage system the possibility of SQL as well as NoSQL data access, which allows the re-use of most of the existing code for metadata integration.

        Speaker: Elizaveta Cherepanova (Laboratory of Nuclear Problems)
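The access pattern described above, SQL-style event lookup over a key-value store, can be illustrated with a toy example. The table and column names below are hypothetical (not the actual EventIndex schema), and sqlite stands in for the HBase/Phoenix SQL layer purely so the sketch is self-contained:

```python
import sqlite3

# Illustrative schema only; column names are our own invention.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE eventindex (
        run_number   INTEGER,
        event_number INTEGER,
        dataset      TEXT,
        file_guid    TEXT,
        PRIMARY KEY (run_number, event_number, dataset)
    )
""")
conn.executemany(
    "INSERT INTO eventindex VALUES (?, ?, ?, ?)",
    [
        (358031, 1204, "data18_13TeV.RAW", "guid-aaaa"),
        (358031, 1205, "data18_13TeV.RAW", "guid-aaaa"),
        (358115, 77,   "data18_13TeV.RAW", "guid-bbbb"),
    ],
)

def lookup_event(run, event):
    """Event lookup: in which dataset/file is a given (run, event) stored?"""
    cur = conn.execute(
        "SELECT dataset, file_guid FROM eventindex "
        "WHERE run_number = ? AND event_number = ?",
        (run, event),
    )
    return cur.fetchall()
```

The point of the Phoenix layer is exactly this: the same row-keyed data remains reachable both through NoSQL HBase scans and through plain SQL like the query above.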
      • 15:45
        Performance testing framework for the ATLAS EventIndex 15m

        The ATLAS EventIndex is going to be upgraded in advance of LHC Run 3. A framework for testing the performance of both the existing system and the new system has been developed. It generates various queries (event lookup, trigger searches, etc.) on sets of the EventIndex data and measures the response times. Studies of the response time dependence on the amount of requested data, and data sample type and size, can be performed. Performance tests run regularly on the existing EventIndex and will run on the new system when ready. The results of the regular tests are displayed on the monitoring dashboards, and they can raise alarms in case (part of) the system misbehaves or becomes unresponsive.

        Speaker: Elizaveta Cherepanova (Laboratory of Nuclear Problems)
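The kind of response-time measurement such a framework performs can be sketched as a small timing harness; this is a stdlib illustration of the idea (run each query several times, collect summary statistics), not the actual framework:

```python
import statistics
import time

def measure(query_fn, args_list, repeats=3):
    """Time query_fn over a set of argument tuples; return stats in seconds."""
    samples = []
    for args in args_list:
        for _ in range(repeats):
            t0 = time.perf_counter()
            query_fn(*args)                      # the query under test
            samples.append(time.perf_counter() - t0)
    return {
        "n": len(samples),
        "mean": statistics.mean(samples),
        "median": statistics.median(samples),
        "max": max(samples),
    }
```

Feeding such summaries to a dashboard over time is what lets regular runs raise alarms when part of the system becomes unresponsive.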
      • 16:00
        Development of the ATLAS Event Picking Server 15m

        During LHC Run 2, the ATLAS experiment collected almost 20 billion real data events and produced about three times more simulated events. During physics analysis it is often necessary to retrieve one or a few events to inspect their properties in detail and check their reconstruction parameters. Occasionally it is also necessary to select larger samples of events in RAW format to reconstruct them with enhanced code. The new Event Picking Server automates the procedure of finding the location of the events using the EventIndex and submitting the Grid jobs to retrieve the individual events from the files in which they are stored.

        Speaker: Evgeny Alexandrov (JINR)
      • 16:15
        Computing environment for the Super-Charm-Tau factory detector project 15m

        The Super Charm-Tau (SCT) factory, a high-luminosity electron-positron collider for studying charmed hadrons and the tau lepton, is a project proposed by Budker INP. The project implies a single collision point equipped with a universal particle detector. The Aurora software framework has been developed for the SCT detector. It is based on software packages that are trusted and widely used in high energy physics, such as Gaudi, Geant4 and ROOT. At the same time, new ideas and developments are employed; in particular, the Aurora project benefits a lot from the "turnkey software for future colliders" (Key4HEP) initiative. We will present the first release of the Aurora framework and its core technologies, structure and roadmap for the near future. From the hardware point of view, the Budker INP general computing facility (BINP/GCF), which provides the required computational and storage resources, will be described together with recent developments toward the full-scale offline computing infrastructure.

        Speaker: Dmitriy Maximov (Budker Institute of Nuclear Physics)
      • 16:30
        Simulation Model of an HPC System for Super Charm-Tau Factory 15m

        This work describes the design of a digital model of an HPC system for processing data from the Super Charm-Tau factory, an electron-positron collider of the "megascience" class. The model is developed using the AGNES multiagent modeling platform and includes intelligent agents that mimic the behavior of the main subsystems of the supercomputer, such as the task scheduler, computing clusters, the data storage system, etc. Simulation modeling makes it possible to reliably estimate the exact characteristics and volume of the equipment needed to build the desired HPC system. The simulation model accounts for all aspects of the operation of this system, from the parallel data storage system to the organization of the parallel launch of tasks. The developed subsystems for handling software errors and equipment failures, as well as for ensuring energy efficiency, make it possible to estimate the needed equipment with account for all possible emergency situations. The model allows calculating the parameters of the computing system necessary for processing and storing the results of the operation of the Super Charm-Tau factory after its commissioning.

        Speaker: Dmitry Wiens (ICMMG SB RAS)
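The core idea of such a model, agents representing a scheduler and a pool of compute resources advanced through discrete events, can be shown in miniature. The sketch below is a toy FIFO scheduler with a fixed number of slots, our own illustration and in no way the AGNES platform:

```python
import heapq

def simulate_makespan(jobs, n_slots):
    """Toy discrete-event model of a cluster scheduler (FIFO order).
    jobs: non-empty list of (submit_time, duration);
    n_slots: how many jobs can run concurrently.
    Returns the time at which the last job finishes."""
    running = []                                 # min-heap of finish times
    now = 0.0
    for submit, duration in sorted(jobs):        # process in submit order
        now = max(now, submit)
        if len(running) >= n_slots:              # all slots busy:
            now = max(now, heapq.heappop(running))  # wait for earliest finish
        heapq.heappush(running, now + duration)
    return max(running)
```

A full model would add agents for storage bandwidth, failures and energy use, but the event-driven skeleton stays the same.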
      • 16:45
        Participation of Russian institutes in the processing and storage of ALICE data 15m

        The report presents the results of the work of Russian institutes in the processing of ALICE experiment data during the last three years of operation of the Large Hadron Collider (LHC), including the end of LHC Run 2 and the first year of the COVID-19 pandemic. The main problems and tasks facing both ALICE Grid Computing and its Russian segment before LHC Run 3, including the problems of support and modernization of existing resources, are considered. Plans for the preparation for the operation of the LHC in the HL (high-luminosity) mode are also presented.

        Speaker: Andrey Zarochentsev (SPbSU)
    • 15:30 17:00
      Distributed computing applications 403 or Online - https://jinr.webex.com/jinr/j.php?MTID=mf93df38c8fbed9d0bbaae27765fc1b0f

      403 or Online - https://jinr.webex.com/jinr/j.php?MTID=mf93df38c8fbed9d0bbaae27765fc1b0f

      • 15:30
        A software interface for functional programming on parallel and distributed systems 15m

        There is a huge number of scientific and commercial applications written with sequential execution in mind. Running such programs on multiprocessor systems is possible, but does not take advantage of those systems. To execute a program so that it exploits these capabilities, it is often necessary to rewrite it; however, this is not always the optimal choice. This work considers the possibility of parallel execution of programs written in functional languages and describes in detail the operating principle of the proposed interpreter of a functional programming language. Guile, an implementation of the Scheme language, was chosen as the example. Parallelism in it is achieved by evaluating function arguments in parallel. The result of this work can be used as an example for building such interfaces for other programming languages.

        Speaker: Ivan Petriakov (Saint Petersburg State University)
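The evaluation strategy described in the abstract, evaluating a function's arguments concurrently before applying the function, can be sketched outside Scheme as well. The abstract's interpreter is Guile-based; the Python sketch below (with our own function names) only illustrates the idea:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_apply(fn, *thunks):
    """Evaluate all argument thunks concurrently, then apply fn to the
    results -- the argument-level parallelism described for the interpreter."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(thunk) for thunk in thunks]
        args = [f.result() for f in futures]   # preserve argument order
    return fn(*args)
```

This is safe only when argument expressions are free of side effects, which is exactly why functional languages are a natural fit for the approach.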
      • 15:45
        An intelligent environmental monitoring platform 15m

        Air pollution has a significant impact on human and environmental health. The aim of the UNECE International Cooperative Programme (ICP) Vegetation, in the framework of the United Nations Convention on Long-Range Transboundary Air Pollution (CLRTAP), is to identify the main polluted areas of Europe, produce regional maps and further develop the understanding of long-range transboundary pollution. The program is realized in 43 countries of Europe and Asia, with mosses collected at thousands of sites. The development of a data management system (DMS) for the ICP Vegetation program was initiated in 2016 in the Laboratory of Information Technologies. The DMS has evolved and now offers good options for simplifying and automating the environmental monitoring process. We are using some powerful technologies to provide a new level of services to ICP Vegetation participants. The platform has interesting analytical, classification and prediction abilities. The current architecture, workflow, and principles of data processing and analysis will be presented.

        Speaker: Alexander Uzhinskiy (Dr.)
      • 16:00
        The technology and Tools for the Building of Information Exchange Package Based on Semantic Domain Model 15m

        This paper presents the technology developed by the authors to improve the semantic interoperability of heterogeneous systems exchanging information through an object-oriented bus. We demonstrate a solution that makes it possible to semantically map the models of interacting information systems to a unified data model (domain ontology) when developing an information exchange package.

        Speaker: Elena Yasinovskaya (Plekhanov Russian University of Economics)
      • 16:15
        Development of dashboards for the workflow management system in the ATLAS experiment 15m

        The UMA software stack developed by the CERN-IT Monit group provides the main repository of monitoring dashboards. The adaptation of this stack to the ATLAS experiment began in 2018 to replace the old monitoring system. Since then, many improvements and fixes have been implemented in UMA. One of the most considerable enhancements was the migration of the storage for aggregated data from InfluxDB to ElasticSearch, which significantly reduced the execution time of selection queries over long time ranges. Many dashboards were created and updated in Grafana for various user groups and use cases to monitor the workflow management system and computing infrastructure. "Jobs accounting", "Jobs monitoring", "Site-oriented" and "HS06 reports" are examples of handy dashboards that are regularly used by ATLAS users. This presentation gives an overview of the jobs dashboards in the ATLAS experiment.

        Speaker: Aleksandr Alekseev (National Research Tomsk Polytechnic University)
      • 16:30
        Lifecycle Management Service for the compute nodes of Tier1, Tier2 sites (JINR) 15m

        Megascience experiments, such as CMS, ATLAS, ALICE, MPD, BM@N, etc., are served at the Meshcheryakov Laboratory of Information Technologies (MLIT) of the Joint Institute for Nuclear Research (JINR) using the available computing infrastructure. To ensure the guaranteed and stable operation of the infrastructure under constant load conditions, the centralized and timely maintenance of software and the rapid introduction of new compute nodes are required. As a solution to this task, a service was created; its purpose is to automate the process related to the software maintenance and commissioning of compute nodes.
        The report will give an overview of the service (LCMS) and its components: centralized configuration management of the operating system and programs installed on the compute nodes of the Tier1, Tier2 sites (JINR); continuous integration engine for the automatic validation and loading of puppet manifests from the software repository; service (LCMS) component performance and status monitoring; mechanism for detecting, viewing and comparing security compliance.

        Speaker: Alexandr Baranov (JINR)
      • 16:45
        WALT Platform for Web Application Development 15m

        At the moment, there are many platforms for web application development: Django, ASP.NET Core, Express, Angular, etc. These platforms usually assume a division of labour, with a relatively large group of developers working on a project, each engaged in their own part (design, layout, front end, back end).
        In practice, however, often only one or two people (full-stack developers) participate in the development of an application. The WALT (Web Application Lego Toolkit) platform presented in this report is, in our opinion, well suited for the development of web applications by a small group. WALT is a simple template language plus an interpreter for that language. The report overviews the WALT architecture, its template language, and a list of corporate web applications at JINR developed using WALT.

        Speaker: Ivan Sokolov (Alexandrovich)
    • 15:30 17:00
      Distributed computing systems Conference Hall or Online - https://jinr.webex.com/jinr/j.php?MTID=m6e39cc13215939bea83661c4ae21c095


      • 15:30
        IHEP tier-2 computing center: status and operation 15m

        The RU-Protvino-IHEP site is one of the three biggest WLCG Tier-2 centers in Russia. Its computing infrastructure serves the "big four" LHC high energy physics experiments (ATLAS, ALICE, CMS, LHCb) and local experiments at IHEP such as OKA, BEC, radiobiology stands and others. In this work, the current status of the computing capacities, networking and engineering infrastructure is shown, as well as the contribution of the grid site to the collaboration experiments.

        Speaker: Viktor Kotliar (IHEP)
      • 15:45
        INP BSU grid site 15m

        The current status of the INP BSU grid site is presented, together with an overview of the usage of the INP BSU computational facilities and the integration of its cloud resources with the JINR cloud.

        Speaker: Dmitry Yermak (Institute for Nuclear Problems of Belarusian State University)
      • 16:00
        Research Cloud Computing Ecosystem in Armenia 15m

        Research Cloud Computing Ecosystem in Armenia

        Abstract
        Given the growing needs for computational resources and data storage within higher-education institutions, and the large investment and financial resources they require, the concept of a "National Research Cloud Platform (NRCP)" is crucial to provide the necessary IT support for educational, research and development activities, giving access to advanced IT infrastructure, data centers and applications while protecting sensitive information. In this article we illustrate the concept of the NRCP, its background, deployment stages and architecture, and finally some use cases.

        Keywords
        IaaS, NRCP, Openstack, ArmCloud, ArmCluster, ArmGrid, Earth science, Life Science, VM

        1. Introduction

        Virtualization transforms the IT industry landscape by providing capabilities to run various virtual machines (VMs) on the same hardware, enhancing resource sharing and improving performance [1]. The low overhead cost of implementing this technology, the high and constantly growing demand for computing resources, and the need to provide more flexible services have led to a transition away from bare-metal servers and toward the provision of virtualized resources (virtual machines, storage and even network infrastructures) that are easier to scale and provide a sufficient level of reliability. The cloud computing environment has proven to be the base of these changes, which has increased the demand for cloud services and computing resources throughout scientific institutions and universities [2]. The term came into use with Amazon in 2008.
        Later, this novel technology was developed and provided as a service by GAFA (Google, Apple, Facebook and Amazon) and other public cloud providers [3]. The main approach of public cloud providers is to deliver on-demand services over the Internet to anyone who registers and pays for them. In contrast to public clouds, private cloud infrastructures are built for one or a few institutions or companies, which host the facilities on their side [4]. For instance, national research cloud platforms provide cloud services to the academic and research community on top of the research and education networks. It is possible to combine the public and private cloud deployment models to create a synergy, called a hybrid cloud. Usually, public cloud resources supply the elasticity of computational resources in case of need.
        Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) are the leading cloud computing service layers [5]. IaaS provides infrastructure such as VMs and related resources: VM disk image libraries, block- and file-based storage, firewalls, load balancers, IP addresses, and virtual local area networks. IaaS is the basic layer of the cloud computing model and is widely used, for example via Amazon Web Services (AWS) Elastic Compute Cloud (EC2) and Simple Storage Service (S3) [6]. PaaS delivers computing platforms that typically include an operating system (OS), a programming language execution environment, a database, and a web server. Technically it is a layer on top of IaaS that provides a platform for building applications; Microsoft Azure, for instance, allows applications and services to be built, tested, deployed, and managed in Microsoft-managed data centers [7]. SaaS provides a service delivery model for accessing application services over the web without worrying about installing, maintaining, or coding the software. The SaaS provider manages the software setup and maintenance, so the software can be accessed and operated without downloading or installing anything.
        In addition to these leading service layers, the Cloud can provide a wide range of services with an extra degree of flexibility and scalability, such as provisioning high-demand virtual high-performance computing (HPC) resources [8]. The critical challenge of deploying such cloud services is the complexity and cost of purchasing and maintaining the computing resources, which requires considerable human effort to keep all services up to date and reliable. The cost is a significant limitation for developing countries, as in the case of Armenia. In 2018, the Institute for Informatics and Automation Problems of the National Academy of Sciences of the Republic of Armenia (IIAP) launched the “National Research Cloud Platform (NRCP)” initiative. NRCP aims to deliver on-demand, cost-effective cloud computing resources and services to local institutions and research communities.
        A market analysis carried out with scientific communities and stakeholders aimed to identify the demands and the complexity of the scientific problems they face, and to gather information about the communities' tools and packages. As a result, IIAP deployed user-oriented cloud services to fulfill almost all types of demand in Armenia, ranging from general to domain-specific services. The rest of the article is organized as follows: Section 2 presents the architecture and design of NRCP; Section 3 presents some cloud services that benefit from the infrastructure; conclusions and a discussion of future work follow in Section 4.

        2.National Research Cloud Platform

        In the first stage, a federated cloud infrastructure in the Black Sea region was deployed, enabling user communities from the participating countries (Armenia, Georgia, Moldova, and Romania) to join local virtualized resources and providing them with VMs, networks, and storage [9]. The federated infrastructure lets user communities use local or remote resources and eases their regional collaboration. The federated cloud platform, based on the OpenNebula middleware, addresses regional problems that require large amounts of computational resources, even when the actual simulations do not run in the zone where the data is stored. In the next stage, virtualization was widely implemented for the core services of the Armenian National Grid (ArmGrid) infrastructure, providing on-demand access to a sustainable computing platform [10]. The ArmGrid infrastructure consists of seven Grid sites located in the leading research centers and universities of Armenia, with approximately 450 CPU cores in total. Unlike the single-system ArmCluster (Armenian Cluster), ArmGrid is an autonomous decentralized system with distributed job management and scheduling capabilities [11]. Finally, a hybrid research computing platform was deployed, combining HPC with Grid and Cloud computing on the basis of the ArmCluster HPC cluster, the resource-sharing ArmGrid, and the on-demand federated cloud infrastructure. Each infrastructure defines its own rules for composing resources and executing applications, such as resource ownership and sizing, application portability, and resource allocation policy.
        Based on these experimental infrastructures, an NRCP has been proposed for Armenia that aims at better hardware utilization, more reliable storage systems and service management, and higher-level services with virtualization support. The infrastructure provides VMs and networking services and consists of a cloud core service and a scheduler, application programming interfaces, databases, and the nodes where VMs run. Full virtualization on the kernel-based VM (KVM) hypervisor has been implemented on each computational node [12].
        NRCP consists of three Zones: Cloud resources, graphics processing unit (GPU) resources, and a data lake (see fig. 1). Combining these three solutions under a single umbrella provides domain-specific services with high availability and scalability. The NRCP is a critical element of the Armenian e-infrastructure [9], a complex national IT infrastructure comprising both communication and distributed computing infrastructures. Most importantly, all input and output data reside on the NRCP side, which reduces data processing time and allows data to be shared between different scientific groups.

        Figure 1: National Research Cloud Platform
        The NRCP architecture is mainly built on multiple Cloud controllers dedicated to different scientific communities, splitting the Cloud resources and the Cloud storage across several scientific domains. The technical information on the computational resources is summarized in Table 1.
        Table 1
        NRCP technical specification

        Server type | Quantity | CPU/GPU model            | CPU/GPUs per server | Cores per server | RAM (GB) | Total cores
        Thin        | 4        | Intel Xeon E5-2630 v4    | 2                   | 20               | 256      | 80
        Fat         | 2        | Intel Xeon Gold 6138     | 4                   | 80               | 512      | 160
        Accelerated | 2        | Intel Core i9-10900KF    | 1                   | 10               | 128      | 20
        Accelerated | 2        | Intel Xeon E5-2680 v3    | 2                   | 24               | 128      | 48
                    |          |  + Intel Xeon Phi 7120P  | 2                   | 122              |          | 244
        Accelerated | 2        | Intel Xeon Gold 5218     | 2                   | 32               | 192      | 64
                    |          |  + Nvidia V100 32GB      | 2                   | 10240            |          | 20480
        Total (cores)                                                                                        | 21096

        Thus, NRCP provides compute services consisting of 616 physical CPU cores and 20480 GPU cores, about 3 terabytes of memory, and 1620 terabytes of data storage (see Table 2).

        Table 2
        The breakdown of storage facilities

        Brand      | Model          | Type        | Quantity | Raw capacity (TB) | Total capacity (TB)
        HPE        | MSA 2052       | All-flash   | 2        | 8                 | 16
        NetApp     | E2824          | Hybrid      | 1        | 12                | 12
        NetApp     | E5760          | Hybrid      | 2        | 720               | 1440
        QNAP       | TS-809U-RP     | NAS         | 1        | 12                | 12
        Supermicro | JBOD Enclosure | NAS         | 1        | 40                | 40
        HPE        | MSL 2024       | Tape (cold) | 1        | 100               | 100
        Total (TB)                                                               | 1620

        For instance, all Earth science production groups are consolidated in a single Zone with a shared storage node, which lets them share data easily when needed and opens the door to better collaboration.

        2.1 OpenStack IaaS

        The OpenStack open-source cloud computing platform provisions IaaS in private and public clouds, similarly to AWS, supporting several hypervisors, load balancing, migration, and other features [10]. OpenStack has been deployed and customized for NRCP, providing the orchestration needed to virtualize servers, storage, and networking. The current deployment is based on the OpenStack Rocky release, using the CentOS 7 Linux distribution on all servers. The deployment is largely automated with Puppet and Linux Bash scripts to simplify adding more Zones in the future [11]. Controller, compute, network, and storage components are used in the deployment (see fig. 2).

        Figure 2: NRCP OpenStack Cloud platform architecture

        Based on username and password authentication, the open-source Horizon dashboard gives administrators an overview of the cloud environment, including resource and instance pools. The Compute service (Nova) provides on-demand computing resources by provisioning and managing VMs. Various VM flavors are provided, ranging from small instances with 2 CPU cores, 2 GB RAM, and a 40 GB HDD to very large instances with 128 CPU cores, 256 GB RAM, and a 1 TB HDD. The OS distributions across instances are quite diverse, including Ubuntu (18.04, 20.04), Debian (10, 11, 12), and CentOS (7, 8). The Networking service (Neutron) manages an IP address pool, including floating IP assignment via the dynamic host configuration protocol, load balancing, firewalls, and virtual private networks.
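        The flavor model described above can be sketched as a simple selection routine: given a resource request, pick the smallest flavor that satisfies it, much as a user would choose from the Nova flavor list. The flavor names and sizes below are illustrative only, not the actual NRCP flavor catalogue.

```python
from dataclasses import dataclass

@dataclass
class Flavor:
    name: str
    vcpus: int
    ram_gb: int
    disk_gb: int

# Hypothetical flavors spanning the range described above.
FLAVORS = [
    Flavor("small", 2, 2, 40),
    Flavor("medium", 8, 32, 200),
    Flavor("large", 32, 128, 500),
    Flavor("xlarge", 128, 256, 1000),
]

def pick_flavor(vcpus: int, ram_gb: int, disk_gb: int):
    """Return the smallest flavor satisfying the request, or None if none fits."""
    candidates = [f for f in FLAVORS
                  if f.vcpus >= vcpus and f.ram_gb >= ram_gb and f.disk_gb >= disk_gb]
    return min(candidates, key=lambda f: (f.vcpus, f.ram_gb, f.disk_gb), default=None)

print(pick_flavor(64, 200, 800))  # only the "xlarge" flavor fits this request
```

        In a real deployment the flavor list would be fetched from the Nova API rather than hard-coded.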

        2.2 Data lake store

        The Data Lake provides a scalable and secure platform that allows all users to upload and download their data at high speed, process the data in real time, use the data in different simulations, and share it between groups.
        For instance, in the domain of astrophysics, the infrastructure's core is the Armenian Virtual Observatory (VO) repository, providing an advanced experimental platform for data archiving, extraction, acquisition, correlation, reduction, and use. The Armenian VO has been ported to the distributed computing infrastructure as a critical tool for analyzing the vast amounts of data provided by surveys such as the Digitized First Byurakan Survey, the largest and first systematic objective-prism survey of the extragalactic sky [12]. The survey consists of 1874 photographic plates containing about forty million low-dispersion spectra and 20 million objects covering 17,000 square degrees.
        Another example is the Armenian Data Cube [13], a complete and up-to-date archive of Earth Observation (EO) data (e.g., Landsat, Sentinel). EO, with its precise and reliable data, is a critical element in addressing environmental challenges related to water, soil, or plants. The Armenian Data Cube contains three years (2016-2019) of Landsat 7-8 and Sentinel 2-5P analysis-ready imagery over Armenia; full coverage of Armenia requires 11 Sentinel-2 and 9 Landsat 7-8 scenes. Because satellite images are often voluminous, gathering and processing these large files typically requires HPC resources.
        The Cloud storage facility is mounted on demand on the VMs, with dedicated quotas, so simulations can use the stored data smoothly. The data inside the storage is replicated to keep it safe from end-user errors.

        2.3 Accelerated Computing and Deep learning

        Dedicated servers with Tesla V100 cards and Docker containers have been deployed with the following machine learning and deep learning tools so that experiments can be conducted quickly:
        • Python: a popular language with high-quality machine learning and data analysis libraries.
        • R: a language for statistical computing and graphics.
        • Pandas: a Python data analysis library enhancing analytics and modeling.
        • Jupyter Notebook: a free web application for interactive computing, enabling users to develop and execute code, and to create and share documents with a live code.
        A dedicated storage volume is mounted on the container, enabling the user to use the Cloud storage data for the experiments based on the requirement.
        In general, the system allows large datasets to be ingested and managed so that algorithms can be trained efficiently. It enables deep learning models to scale efficiently and at lower cost using GPU processing power. By leveraging distributed networks, deep learning in the cloud lets users design, develop, and train deep learning applications faster.
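        As a minimal illustration of the interactive analysis the listed tools enable, the sketch below uses Pandas to aggregate a small, entirely hypothetical usage log; in practice the data frame would be read (e.g. with pd.read_csv) from the Cloud storage volume mounted on the container.

```python
import pandas as pd

# Hypothetical experiment log standing in for data on the mounted volume.
df = pd.DataFrame({
    "experiment": ["ndvi", "ndvi", "wrf", "wrf", "md"],
    "gpu_hours": [4.0, 6.0, 12.0, 8.0, 20.0],
})

# Aggregate GPU usage per experiment, the kind of summary a Jupyter
# Notebook user would produce interactively.
usage = df.groupby("experiment")["gpu_hours"].sum().sort_values(ascending=False)
print(usage)
```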
        Users typically access the predefined VMs, which contain all the necessary tools, libraries, and packages, via SSH keys. This approach reduces the number of users who need access to the Horizon dashboard.
        In general, everything is consolidated and harmonized at the controller level in each Zone. The GPU Zone is a dedicated environment for workloads where GPU usage substantially increases the effectiveness of scientific experiments, such as biology or machine learning. This part is not consolidated under the OpenStack umbrella; instead, we provide Docker containers that users can access to run their tasks directly, since all the necessary packages and tools are already installed inside the container.

        2.4 Monitoring

        Cloud monitoring is a critical enabler for providers and consumers to manage and control hardware and software infrastructures by providing information and key performance indicators, such as workload performance, quality of service, or service-level agreement compliance.
        Prometheus monitors all the resources, recording real-time metrics in a time-series database (allowing high dimensionality) built around an HTTP pull model, with flexible queries and real-time alerting. This data is also sent to Grafana to give a complete overview of Cloud resource usage and to push utilization as high as possible: as soon as inactivity is detected on a resource, we contact the user to confirm whether the resource is still needed, so that it can be released to other users of the system.
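        The inactivity check described above can be sketched as follows. The threshold, sample window, and metric values are hypothetical; in production the samples would come from a Prometheus range query rather than a hard-coded dictionary.

```python
def idle_instances(samples, threshold=5.0, min_points=3):
    """Flag instances whose recent CPU utilisation (%) stayed below
    `threshold` for the last `min_points` samples.
    `samples` maps instance name -> utilisation values, ordered oldest
    to newest (the shape a Prometheus range query would yield)."""
    idle = []
    for name, values in samples.items():
        recent = values[-min_points:]
        if len(recent) >= min_points and all(v < threshold for v in recent):
            idle.append(name)
    return sorted(idle)

# Hypothetical metric samples for three VMs.
metrics = {
    "vm-ndvi-01": [42.0, 37.5, 55.0],   # active
    "vm-wrf-02":  [1.2, 0.8, 0.5],      # idle: candidate for reclaiming
    "vm-md-03":   [0.1, 12.0, 0.3],     # recently active
}
print(idle_instances(metrics))  # ['vm-wrf-02']
```

        Flagged instances would then trigger the manual follow-up with the owner described above, not automatic deletion.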

        3.Scientific Communities

        For the last three years, NRCP has served multiple scientific projects and communities. The graph below shows the different subject or domain-specific areas served by our solution. This article highlights only the Earth science and life science communities.

        Figure 3: NRCP scientific communities

        3.1 Earth Science Community

        The Earth science user community addresses several critical societal challenges, such as weather prediction, air quality monitoring and prediction, water quality and quantity monitoring, and Earth observation. The GRASS geographic information system (GIS), Quantum GIS, and Data Cube codes and tools are used for remote sensing processing, covering vector and raster geospatial data management, geoprocessing, spatial modelling, and visualization. Several domain-specific services based on large-scale simulations have been developed and transformed into SaaS HPC solutions, including:
        • A shoreline change monitoring service, based on water object identification and shoreline delineation with single-band and multi-band methods, shows changes in visible water indicators and the surrounding environment and is accessible through a Jupyter notebook [14]. The service identifies the location of the shoreline and its changes over time using remote sensing. As a case study, we validated it on Lake Sevan, obtaining sufficiently reliable results.
        • A Normalized Difference Vegetation Index (NDVI) time-series geoprocessing web service monitors the state of plants as a measure of greenness [14]. The NDVI time-series analysis uses HPC resources to process large numbers of high-resolution multispectral satellite images. The service hides the difficulties of the geoprocessing pipeline and avoids the time needed to search for, collect, and upload input data sets, and it can quickly compare NDVI time-series simulations with the available spatial and temporal environmental field data sets. Thirteen vegetation indices have been studied to find an optimal parallelization approach for our infrastructure.
        • A regional-scale weather forecasting service serves operational forecasting and atmospheric research needs using different weather prediction models and parameterizations [16]. The service's core is the next-generation mesoscale Advanced Weather Research and Forecasting (WRF-ARW) modeling system, running with initial and boundary conditions derived from Global Forecast System analyses and forecasts at 0.25 deg resolution [17]. The service runs many times for the same region with various initial conditions and addresses many environmental and weather challenges, such as predicting high temperatures in the southern regions of Armenia or analyzing wintertime cold-air pools.
        • A hydrological modeling "Desktop as a Service (DaaS)" service studies, predicts, and manages water resources [18]. Hydrological models paired with meteorological models allow long-term simulation of large watersheds at coarse spatial and temporal resolution. The service's core is the river-basin-scale Soil and Water Assessment Tool (SWAT) model, comprising sensitivity analysis, calibration, and validation stages [23]. The simulation is CPU-intensive and requires many inputs: temperature, wind speed, precipitation, land use, soil data, and a digital elevation model. The parameters most sensitive for calibration, such as runoff processes and the baseflow recession coefficient, have been studied and selected. As a case study, the service was validated on the Sotk watershed of Lake Sevan to assess the feasibility of watershed modeling in that region.
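        Of the services above, the NDVI computation has a particularly compact core: per pixel, NDVI = (NIR − Red) / (NIR + Red), computed from the near-infrared (NIR) and red reflectance bands and ranging over [-1, 1]. A minimal NumPy sketch with toy band values (not real satellite data) is shown below; the production service applies this over full multispectral scenes on HPC resources.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index per pixel:
    NDVI = (NIR - Red) / (NIR + Red), in [-1, 1].
    `eps` guards against division by zero over dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance bands (a real scene has millions of pixels).
nir = np.array([[0.5, 0.4], [0.3, 0.1]])
red = np.array([[0.1, 0.2], [0.3, 0.1]])
print(ndvi(nir, red))
```

        Because the formula is element-wise, it parallelizes trivially: scenes can be tiled and distributed across nodes, which is the property the parallelization study of the thirteen vegetation indices exploits.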

        3.2 Life Science Community

        Modern life sciences research depends on traditional HPC systems, data analytics, and the management of massive datasets. HPC plays a vital role in genomic analysis, which processes large amounts of data (for instance, next-generation sequencing), and in biomolecular studies that carry out Molecular Dynamics (MD) or molecular mechanics simulations. Our aim is to transform services into SaaS solutions and optimize the infrastructure by adopting advanced technology, including GPU computing and machine learning capabilities, for molecular modeling, molecular biology, statistical analysis and bioinformatics, and increasingly large biomolecular systems. For instance, the modeling and MD study service for complex systems, based on the classical treatment of interactions among atoms, offers a detailed picture of the structure and dynamics of multicomponent systems, which is of particular interest for improving our knowledge and understanding of biological and chemical processes [19]. The NAMD and GROMACS MD packages are customized for HPC simulation of large biomolecular systems such as proteins, lipids, and nucleic acids.

        4.Conclusion and lessons learned

        This article has summarized the experience gained so far and highlighted a few scientific use cases in which the community is intensively using NRCP resources. Throughout the deployment and implementation of NRCP for diverse scientific communities with specific domain-oriented approaches, a list of recommendations has been collected:
        • To consider the complete infrastructure and its capabilities before deployment, which helps to choose the best possible options and tools for the needs at hand.
        • To conduct benchmarks and experiments before putting the solution into production; to confirm the systems' reliability by handling different scenarios, even if the deployment of some packages needs to be done several times.
        • To prepare a well-documented tutorial with exact details of all services and solutions, considering that not every user has an IT background when using the system.
        • To conduct a training campaign with potential communities and explain the opportunities and challenges. It will help to understand the benefits of such solutions by boosting scientific experiments and simulations.
        • To minimize the manual deployment as much as possible. For instance, it is planned to implement multiple bash scripts and Puppet automation.
        • To maximize the overall resource usage in computing, networking, and storage resources considering the energy consumption minimization.
        • To use federated identity authentication based on SAML 2.0 (Security Assertion Markup Language) to make easy user access [25].

        It is planned to develop and provide user-specific high-level services, such as SaaS solutions, for all those communities, so that experiments can be conducted without directly accessing the computing resources: the communities may use a browser to access any domain-specific service and run the experiment from it, further simplifying cloud resource usage. OpenStack Ironic will be implemented for the most economical and efficient use of computing resources, focusing on HPC Cloud provisioning based on fully virtual, bare-metal, and hybrid architectures. The ultimate future goal is the establishment of a National Open Science Cloud Initiative and its further integration with the European Open Science Cloud and European Research Infrastructures, such as the European Life-science Infrastructure for Biological Information or the European Open Science Infrastructure [26].

        5.Acknowledgement

        This paper is supported by the European Union’s Horizon 2020 research infrastructures programme under grant agreement No 857645, project NI4OS Europe (National Initiatives for Open Science in Europe), and the State Committee of Science of Armenia under the State Target Project “Deployment of a cloud infrastructure to tackle scientific-applied problems”.

        6.References

        [1] M.F. Mergen, V. Uhlig, O. Krieger, J. Xenidis. ”Virtualization for high-performance computing.” ACM SIGOPS Operating Systems Review. 2006 Apr 1;40(2):8-11.
        [2] M. Armbrust, A. Fox, R. Griffith, A.D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, M. Zaharia, A view of cloud computing, Communications of the ACM 53.4 (2010) 50-58. doi:10.1145/1721654.1721672
        [3] V. Chang, G. Wills, D. De Roure. ”A review of cloud business models and sustainability.” In2010 IEEE 3rd International Conference on Cloud Computing 2010 Jul 5 (pp. 43-50). IEEE.
        [4] Y. Jadeja, K. Modi. ”Cloud computing-concepts, architecture and challenges.” In2012 International Conference on Computing, Electronics and Electrical Technologies (ICCEET) 2012 Mar 21 (pp. 877-880). IEEE.
        [5] S.K. Sowmya, P. Deepika, J. Naren, Layers of Cloud–IaaS, PaaS and SaaS: A Survey, International Journal of Computer Science and Information Technologies 5.3 (2014) 4477-4480.
        [6] J. Murty, Programming amazon web services: S3, EC2, SQS, FPS, and SimpleDB. " O'Reilly Media, Inc."; 2008 Mar 25.
        [7] L. Qian, Z. Luo, Y. Du , L. Guo. (2009) Cloud Computing: An Overview. In: Jaatun M.G., Zhao G., Rong C. (eds) Cloud Computing. CloudCom 2009. Lecture Notes in Computer Science, vol 5931. Springer, Berlin, Heidelberg.
        [8] R.R. Expósito, G.L. Taboada, S. Ramos, J. Touriño, R. Doallo, Performance analysis of HPC applications in the cloud, Future Generation Computer Systems 29.1 (2013) 218-229. doi: 10.1016/j.future.2012.06.009
        [9] H Astsatryan, A Hayrapetyan, W Narsisian, V Sahakyan, Yu Shoukourian, G Neagu and A Stanciu. Environmental science federated cloud platform in the BSEC region, International Journal of Scientific & Engineering Research 1.1 (2014) 1130–1133.
        [10] Hrachya Astsatryan, Yuri Shoukouryan and Vladimir Sahakyan. ”Grid activities in Armenia.” In Proceedings of the International Conference Parallel Computing Technologies (PAVT’2009). Novgorod, Russia, March, 2009.
        [11] H.V. Astsatryan, Yu Shoukourian and V. Sahakyan. ”Creation of High-Performance Computation Cluster and DataBases in Armenia.” In Proceedings of the Second International Conference on Parallel Computations and Control Problems (PACO ‘2004), pages 466–470, 2004.
        [12] Y. Yamato, OpenStack hypervisor, container and baremetal servers performance comparison, IEICE Communications Express 4.7 (2015) 228-232. doi: 10.1587/comex.4.228.
        [13] Hrachya Astsatryan, Vladimir Sahakyan, Yuri Shoukourian, Pierre-Henri Cros, Michel Dayde, Jack Dongarra, Per Oster. ”Strengthening Compute and Data intensive Capacities of Armenia." IEEE Proceedings of 14th RoEduNet International Conference - Networking in Education and Research (NER'2015), Craiova, Romania, pp. 28-33, September 24-26 2015, DOI: 10.1109/RoEduNet.2015.7311823.
        [14] O. Sefraoui, M. Aissaoui, M. Eleuldj, OpenStack: toward an open-source solution for cloud computing, International Journal of Computer Applications 53.3 (2012) 38-42. doi: 10.5120/8738-2991.
        [15] J. Loope, Managing infrastructure with puppet: configuration management at scale. " O'Reilly Media, Inc."; 2011 Jun 9.
        [16] A.M. Mickaelian, H.V. Astsatryan, A.V. Knyazyan, T. Yu. Magakian, G.A. Mikayelyan, L.K. Erastova, L.R. Hovhannisyan, L.A. Sargsyan and P.K. Sinamyan. Ten Years of the Armenian Virtual Observatory. ASPC, vol. 505, no. 16, 2016
        [17] Sh. Asmaryan, A. Saghatelyan, H. Astsatryan, L. Bigagli, P. Mazzetti, S. Nativi, Y. Guigoz, P. Lacroix, G. Giuliani and N. Ray. Leading the way toward an environmental National Spatial Data Infrastructure in Armenia. South-Eastern European Journal Issue of Earth Observation and Geomatics 3 (2014) 53–62.
        [18] Shushanik Asmaryan, Vahagn Muradyan, Garegin Tepanosyan, Azatuhi Hovsepyan, Armen Saghatelyan, Hrachya Astsatryan, Hayk Grigoryan, Rita Abrahamyan, Yaniss Guigoz and Gregory Giuliani, Paving the way towards an armenian data cube, Data 4.3 (2019) 1–10. doi: 10.3390/data4030117.
        [19] Hrachya Astsatryan, Andranik Hayrapetyan, Wahi Narsisian, Shushanik Asmaryan, Armen Saghatelyan, Vahagn Muradyan, Gregory Giuliani, Yaniss Guigoz and Nicolas Ray, An interoperable cloud-based scientific GATEWAY for NDVI time series analysis, Elsevier Computer Standards & Interfaces 41 (2015). doi: 10.1016/j.csi.2015.02.001.
        [20] H. Astsatryan, A. Shakhnazaryan, V. Sahakyan, Yu. Shoukourian, V. Kotroni, Z. Petrosyan, R. Abrahamyan and H. Melkonyan. ”WRF-ARW Model for Prediction of High Temperatures in South and South East Regions of Armenia.” In IEEE 11th International Conference on e-Science, pages 207–213. IEEE, 2015.
        [21] Michael C Coniglio, James Correia Jr, Patrick T Marsh and Fanyou Kong, Verification of convection-allowing WRF model forecasts of the planetary boundary layer using sounding observations, Weather and Forecasting 28.3 (2013) 842–862. doi: 10.1175/WAF-D-12-00103.1.
        [22] H. Astsatryan, W. Narsisian and Sh. Asmaryan, SWAT hydrological model as a DaaS cloud service, Springer Earth Science Informatics 9.3 (2016) 401–407. doi: 10.1007/s12145-016-0254-6.
        [23] Arnold JG, Moriasi DN, Gassman PW, Abbaspour KC, White MJ, Srinivasan R, Santhi C, Harmel RD, Van Griensven A, Van Liew MW, Kannan N. SWAT: Model use, calibration, and validation, Transactions of the ASABE 55.4 (2012) 1491-1508. doi: 10.13031/2013.42256.
        [24] Armen Poghosyan, Levon Arsenyan and Hrachya Astsatryan, Dynamic Features of Complex Systems: A Molecular Simulation Study, Springer High-Performance Computing Infrastructure for South East Europe’s Research Communities, pages 117–121, 2014.
        [25] D.W. Chadwick, K. Siu, C. Lee, Y. Fouillat, Germonville D, Adding federated identity management to openstack, Journal of Grid Computing 12.1 (2014) 3-27. doi: 10.1007/s10723-013-9283-2.
        [26] P. Budroni, J. Claude-Burgelman, M. Schouppe, Architectures of knowledge: the European open science cloud, ABI Technik 39.2 (2019) 130-41. doi: 10.1515/abitech-2019-2006.

        Speaker: Mr Artashes Mirzoyan (Institute for Informatics and Automation Problems of the National Academy of Sciences of the Republic of Armenia)
      • 16:15
        COMPASS production system: Frontera experience 15m

        Since 2019, the COMPASS experiment has run on the Frontera high-performance computer. This is a large machine (number 5 in the 2019 ranking of the most powerful supercomputers), and this report presents the details, problems, and approaches to organizing data processing on it.

        Speaker: Artem Petrosyan (JINR)
      • 16:30
        Concurrently employing resources of several supercomputers with ParaSCIP solver by Everest platform 15m

        ParaSCIP is one of the few open-source solvers implementing a parallel version of the Branch-and-Bound (BNB) algorithm for discrete and global optimization problems on computing systems with distributed memory, e.g. clusters and supercomputers. As known from publications, up to 80,000 CPU cores have been used successfully to solve problems from the MIPLIB test libraries, on the Titan supercomputer at Oak Ridge National Laboratory, USA. During operation, the solver periodically saves the current state of the solution process (so-called checkpoints). This allows the solving process to be resumed later, including on another supercomputer. Usually, this feature is used to bypass time limits on jobs submitted to a cluster. But there are still many interesting scientific and industrial problems that cannot be solved in acceptable time by one “usual” cluster with hundreds of CPU cores.
        This study describes an approach to using the resources of several clusters simultaneously to reduce solving time. For that, the previously developed DDBNB (Domain Decomposition for BNB) toolkit is used, which speeds up the solution process through coarse-grained parallelization based on a prior decomposition of the feasible domain of the problem being solved. DDBNB is available as an application of the Everest distributed computing platform, which is responsible for running jobs on heterogeneous computing resources (servers, cloud instances, clusters, etc.). DDBNB, Everest, and ParaSCIP were modified to enable the exchange of incumbents (feasible solutions found by the BNB solver) between several ParaSCIP instances running on different supercomputers.
        The resulting system was benchmarked on three traveling salesman problem instances of different sizes. The HPC5 supercomputer of the NRC “Kurchatov Institute” and the cHARISMa supercomputer of the HSE University were used as computing resources. A positive effect was observed for two instances, with the speedup especially noticeable for the more complex problem; for the simpler problem, the exchange of incumbents does not seem to affect the speedup. For the third instance there is no particular effect, but at least no slowdown is observed.
        This work is supported by the Russian Science Foundation (Project 20-07-00701).

        Speaker: Sergey Smirnov (Institute for Information Transmission Problems of the Russian Academy of Sciences)
      • 16:45
        Experience in organizing flexible access to remote computing resources from the JupyterLab environment using technologies of the Everest and Templet projects 15m

        The development of artificial intelligence and big data technologies has stimulated the creation of new tools for organizing and automating workflows. The Jupyter project is one of the main workflow automation projects in the field of artificial intelligence. Its key paradigms are the client-server model and a graphical interactive environment in the REPL (read-eval-print loop) style, implemented as a web interface. The multi-window interface and support for several programming languages (in addition to Python) in the new JupyterLab implementation extend the applicability of Jupyter technologies to a wide range of applied and educational computer modeling problems. Solving such problems, however, requires not only a rich user interface but also means of organizing high-performance computations. The usual way to organize computations in the JupyterLab environment is to place its server component directly on the computing resource. This method is not applicable when a single JupyterLab application works with several computing resources in parallel, or when it is technically impossible to deploy the Jupyter server component on the computer where the computations will run. The aim of this work is to demonstrate a technology for flexible interaction between a JupyterLab application and diverse computing resources that solves this problem.

        Предлагаемая технология взаимодействия с вычислительными ресурсами из среды JupyterLab основана на совместном применении платформы распределенных вычислений Everest (everest.distcomp.org), набора средств разработки многозадачных приложений на языке С++ проекта Templet (templet.ssau.ru) и jupyter-ноутбуков со сценариями рабочих процессов для проведения вычислительных экспериментов. Нами реализованы интерактивные рабочие процессы на основе ноутбуков JupyterLab, в которых были задействованы несколько распределенных вычислительных ресурсов. Например, в рабочем процессе анализа хаотического поведения динамической системы с использованием вычисления показателей Ляпунова по алгоритму Бенеттина к интерфейсной части приложения подключались виртуальные машины корпоративного облака Самарского университета под управлением Windows 7 с развернутым пакетом Maple 17. Сервер JupyterLab запускался в публичном сервисе MyBinder.org либо через самостоятельно развернутую службу The Littlest JupyterHub (TLJH). Ноутбуки сценариев рабочих процессов и другой необходимый код загружались на сервер JupyterLab автоматически при сборке docker-образа сервисом MyBinder.org (или с использованием пакета nbgitpuller в TLJH) из git-репозиториев, размещенных на платформе GitHub.org

        В результате нами реализован гибкий доступ к вычислительным ресурсам из среды JupyterLab с возможностью удаленной работы через web-интерфейс; развертыванием сервера JupyterLab отдельно от вычислительных ресурсов; использованием сложных сценариев рабочего процесса, предусматривающих параллельные вычисления на нескольких разнородных вычислительных ресурсах.

        Speaker: Dr Sergey Vostokin (Samara National Research University)
    • 17:00 18:30
      Welcome Party 1h 30m Conference Hall


    • 09:00 10:10
      Plenary reports Conference Hall


      • 09:00
        Evolution of the WLCG computing infrastructure for the High Luminosity challenge of the LHC at CERN 40m Conference Hall


        Speaker: Simone Campana (CERN)
      • 09:40
        Perspective and Strategy of IT Development at IHEP 30m Conference Hall


        Speaker: Qiulan Huang (Institute of High Energy Physics, CAS, China)
    • 10:10 10:30
      Coffee + Group Photo 20m
    • 10:30 12:40
      Plenary reports Conference Hall


      • 10:30
        Status of the DIRAC Interware Project 45m

        DIRAC Interware is a development framework and a set of ready-to-use components for building distributed computing systems of any complexity. Services based on the DIRAC Interware are used by several large scientific collaborations, such as LHCb, CTA and others. Multi-community DIRAC services are also provided by a number of grid infrastructure projects, for example EGI, GridPP and JINR. The DIRAC Interware provides a complete solution for user communities to manage their workloads and data, as well as high-intensity automated workflows. The software is continuously evolving in order to accommodate new types of computing and storage resources, and new technologies for securing distributed operations are also being incorporated. The development process exploits advanced technologies for software certification and continuous integration. In this contribution we will review the current status of the DIRAC Interware project, ongoing developments and examples of usage.

        Speaker: Andrei Tsaregorodtsev (CPPM-IN2P3-CNRS)
      • 11:15
        dCache: Inter-disciplinary storage system 45m

        The dCache project provides open-source software deployed internationally to satisfy ever more demanding storage requirements. Its multifaceted approach provides an integrated way of supporting different use cases with the same storage: high-throughput data ingest, data sharing over wide area networks, efficient access from HPC clusters and long-term data persistence on tertiary storage. Though it was originally developed for the HEP experiments, today it is used by various scientific communities, including astrophysics, biomedicine and the life sciences, each with its own specific requirements. In this presentation we will show some of the new requirements as well as demonstrate how dCache developers are addressing them.

        Speaker: Tigran Mkrtchyan (DESY)
      • 12:00
        Intel architecture, technology and products for HPC and GRID. How to create the most effective system. 20m
        Speaker: Nikolay Mester (INTEL)
      • 12:20
        Disaggregated infrastructure: future trend and current implementation in MLIT JINR Govorun system. 20m
        Speaker: Alexander Moskovsky
    • 12:40 13:30
      Lunch 50m
    • 13:30 15:00
      Big data Analytics and Machine learning. 407 or Online - https://jinr.webex.com/jinr/j.php?MTID=m573f9b30a298aa1fc397fb1a64a0fb4b


      • 13:30
        Event index based correlation analysis for the JUNO experiment 15m

        The Jiangmen Underground Neutrino Observatory (JUNO) experiment is mainly designed to determine the neutrino mass hierarchy and precisely measure oscillation parameters by detecting reactor anti-neutrinos. The total event rate from DAQ is about 1 kHz and the estimated volume of raw data is about 2 PB/year, but the rate of reactor anti-neutrino events is only about 60/day. One of the challenges for data analysis is therefore to select sparse physics signal events from a very large amount of data, whose volume cannot be reduced by the traditional data streaming method. In order to improve the speed of data analysis, a new correlated data analysis method has been implemented based on event index data. The index data contain the addresses of events in the original data files as well as all the information needed for event selection, and are produced during event pre-processing using JUNO's Sniper-based offline software. The index data are subsequently filtered with refined selection criteria using Spark, so that their volume is further reduced. At the final stage of data analysis, only the events within the time window are loaded, according to the event addresses in the index data. A performance study shows that this method achieves a 15-fold speedup compared to correlation analysis that reads all the events. This contribution will introduce the detailed software design for event-index-based correlation analysis and present performance measured with a prototype system.
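The two-stage selection described above can be sketched in a few lines of plain Python (a hypothetical illustration, not the JUNO/Sniper or Spark code; the field names and cut values are invented):

```python
# Hypothetical sketch of event-index-based correlation analysis: index
# records carry the file address of each event plus the features needed
# for selection, so only events inside a coincidence time window are
# ever read back from the raw files.

from dataclasses import dataclass

@dataclass
class IndexEntry:
    file_name: str    # raw data file holding the event
    offset: int       # event address inside the file
    timestamp: float  # event time in seconds
    energy: float     # feature used by the selection (invented), in MeV

def select_candidates(index, emin, emax):
    """First stage: refine the index with a cheap feature cut."""
    return [e for e in index if emin <= e.energy <= emax]

def correlate(index, window):
    """Second stage: keep pairs of events closer than `window` seconds.
    Only these addresses would then be used to load full events."""
    events = sorted(index, key=lambda e: e.timestamp)
    pairs = []
    for a, b in zip(events, events[1:]):
        if b.timestamp - a.timestamp <= window:
            pairs.append((a, b))
    return pairs

index = [
    IndexEntry("run01.root", 0, 0.0, 1.2),
    IndexEntry("run01.root", 1, 0.5, 3.0),
    IndexEntry("run02.root", 0, 0.6, 2.1),   # close in time to the previous one
    IndexEntry("run02.root", 1, 10.0, 2.5),
]

candidates = select_candidates(index, emin=1.5, emax=4.0)
pairs = correlate(candidates, window=0.5)
```

In production the first stage would run as a Spark job over the index files, and only the (file, offset) addresses surviving the correlation step would trigger reads of full events.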

        Speaker: Tao Lin (IHEP)
      • 13:45
        Mechanisms for identifying the patterns of the dynamics of scientific and technical publications on the example of the thematic direction "Robotics" 15m

        Introduction
        In world practice, the number of articles published in leading scientific journals is an indicator of the results of the scientific activities of researchers, research organizations and higher educational institutions. International publication activity reflects the level of development of national science against the background of other countries, especially in the field of basic research, where by definition there can be no results other than publications. To track and analyze the dynamics of scientific information flows, developed countries have created information and analytical systems that aggregate scientific publications. Currently, the best-known systems of this type are Web of Science (WoS) and Scopus.
        Web of Science is a collection of diverse databases hosted on the ISI Web of Knowledge platform and processed by the US Institute for Scientific Information. WoS provides researchers and professionals with information on all branches of knowledge across more than 12 000 journals, 120 000 conference proceedings and more than 4 400 sites. Access to the WoS databases is licensed and provided on a paid basis to universities, institutes, scientific organizations and individuals. A paid subscription gives access to full-text versions of materials and supports various types of searches over the database. WoS includes three interrelated databases: the Science Citation Index Expanded, the Social Sciences Citation Index and the Arts and Humanities Citation Index. The platform has built-in capabilities for searching, analyzing and managing bibliographic information. The Web of Science archive reaches back to 1970.
        Scopus is a system of bibliographic databases covering various fields of science, owned and operated by the Dutch publishing company Elsevier. Scopus indexes more than 20 thousand scientific publications in the technical and medical sciences and the humanities, from 5 thousand publishers, and also indexes conference proceedings and serial book publications. As of early 2010, Scopus contained over 38 million scientific publication records, including over 19 million records of resources published since 1996 with cited-reference lists. The Scopus database is available by paid subscription via a web interface and consists of four basic subject areas: Life Sciences, Health Sciences, Physical Sciences, and Social Sciences & Humanities.
        Both databases are widely used in many countries of the world to assess the effectiveness of individual researchers as well as scientific teams and institutions. Thus, using the built-in tools of these information and analytical systems, it is possible to analyze publication activity in any area of research of interest.
        1. Rationale for selecting a study topic
        The idea of creating a mechanical device similar to humans or other living beings, both in appearance and in actions, has been an area of interest for humanity since time immemorial. The main motives for this interest were the desire to ease human labor, to simplify the study of the surrounding world and to provide protection from enemies. With the growth of society's technological knowledge and the increasing complexity of production, the emergence and development of a variety of controlled machines inevitably led to the formation of a new scientific direction, robotics. Robotics is an applied science based on cybernetics, bionics and mechanics, engaged in the development of automated technical systems based on electronics and programming. Robotics studies the theory and methods of calculating and designing robots, their systems and elements, as well as the problems of the complex automation of production and scientific research using robots.
        From the creation of the first robots up to the present day, human capabilities have served as their model. It was the desire to replace humans in hard and dangerous work that gave rise to the idea of creating a robot. Robots owe their advent, in particular, to the computerization of production, the automation of technological processes, and the vast experience gained in operating machine tools with numerical program control.
        The choice of robotics as the subject of this study is not accidental: it is a promising, high-tech, dynamically developing field located at the junction of related industries. Many of the world's leading powers are engaged in research and development in this field. The scope of robotics is quite extensive, ranging from simple robots that execute simple commands to complex robots that execute whole sets of algorithms. Each robot or program consists of a huge number of components, which requires a certain precision in development and high qualifications and competence from the developers of robotic systems. In addition, the area under consideration involves many parties that participate in the creation of robots. Finally, there is a huge variety of information about the robotic field, which makes it possible to systematize the accumulated experience and analyze trends of interest.
        2. Express analysis of the thematic area "Robotics"
        The present study of the cyclicality of publications was conducted on the basis of the Web of Science search platform. To analyze publication activity in the WoS system, the following search query ("advanced search") was composed "by WoS category": WC = "Robotics", with a filter by year, "1970-2019".
        The search results page presents 168 562 publications and conference materials as of the second quarter of 2019. The first publication in the "Robotics" category appeared in the Web of Science archive in 1989; this year can be considered the beginning of serious publication activity in this area.
        We will analyze the current state of development of this area based on data obtained from the WoS abstract database. Figure 1 shows a graph of the total number of publications on the topic from 1995 to 2018.

        Figure 1. The number of publications on the topic "Robotics" from 1995 to 2018
        Green indicates the years of growing interest in robotics; the decline in publication activity and the stationary use of technologies are shown in orange; the years of maximum publication activity are highlighted in red. As can be seen, from 1989 to 1999 the world community conducted some research and conferences and published articles, but the jump in interest in robotics and the beginning of active worldwide publication activity occurred in 1999-2000, amounting to almost 2 500 publications.
        In the period from 2000 to 2008 there was continuous growth, and the latest technological solutions were introduced. Growth peaked in 2008 at 11 418 publications. From 2009 to 2011, development proceeded on the basis of published documents, and from 2011 to 2015 there was a new jump in interest and, accordingly, in the development of robotics, with the number of publications reaching almost 7 000. After this growth peak, as in 2008, publication activity began to decline, which means that since 2015 active development of tried-and-tested technologies has been underway.
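The growth and decline phases described above can be recovered automatically from a yearly publication series. A minimal sketch (the counts below are illustrative; only the 2008 peak value is taken from the text):

```python
# Illustrative sketch of labelling growth / decline phases in a yearly
# publication series. The counts are made up, except the 2008 peak of
# 11 418 publications mentioned in the text.

def label_phases(counts):
    """Label each year after the first as 'growth' or 'decline'."""
    years = sorted(counts)
    labels = {}
    for prev, year in zip(years, years[1:]):
        labels[year] = "growth" if counts[year] > counts[prev] else "decline"
    return labels

def peak_year(counts):
    """Year with the maximum number of publications."""
    return max(counts, key=counts.get)

counts = {2006: 9000, 2007: 10500, 2008: 11418, 2009: 10900, 2010: 10100}
phases = label_phases(counts)
```

Applied to the real WoS series, this kind of labelling reproduces the colour coding of Figure 1 programmatically.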
        Figure 2 is a diagram of the ratio of document types published in the WoS archive. The most numerous are conference materials and articles: 127 016 and 38 437 documents, respectively. We consider these document types separately below.

        Figure 2. The ratio of the number of all publications on the topic "Robotics"
        Figure 3 shows the distribution of the number of documents by country. As the chart shows, the United States of America leads in the total number of published documents (30 971). It is followed by the People's Republic of China (27 624) and Japan (20 783). It can be concluded that these three countries are the most actively engaged in studying the topic of "Robotics".

        Figure 3. Distribution of publications by country
        2.1. Research of conference materials on the topic "Robotics"
        Let's analyze the current state of documents on thematic conferences based on the data from the WoS abstract database. To do this, we investigate the publication activity by the authors of thematic conferences from 1989 to 2019. Figure 4 presents data on the number of worldwide publications for the specified period.
        From 1989 to March 2019, 127 016 documents of this type were published. The diagrams in Figure 4 show that worldwide interest in robotics manifested itself in 2000, when the number of publications increased from 95 to 2 691. The largest number of documents published after thematic conferences (13 274) falls in 2015; the smallest number of works (9) falls in 1989 and 1991.

        Figure 4. Publications in thematic conferences in 1989-2018
        In total, 3 725 conferences were held in the Robotics direction. Table 1 lists the robotics conferences with more than 700 published documents over the entire existence of the category of interest in WoS, together with the number of publications based on their results.
        Table 1. Conferences and the number of publications following them
        № Conference name Number of documents
        1 IEEE RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS IROS 10745
        2 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION ICRA 8531
        3 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION 3833
        4 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS ROBIO 2121
        5 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION 2072
        6 25TH IEEE RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS IROS 1938
        7 7TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION 1831
        8 SICE ICASE INTERNATIONAL JOINT CONFERENCE 1237
        9 INTERNATIONAL CONFERENCE ON CONTROL AUTOMATION AND SYSTEMS 1116
        10 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS MAN AND CYBERNETICS 1095
        11 IEEE ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS AIM 976
        12 39TH IEEE CONFERENCE ON DECISION AND CONTROL 955
        13 IEEE INTERNATIONAL SYMPOSIUM ON INDUSTRIAL ELECTRONICS 912
        14 IEEE ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS 908
        15 AMERICAN CONTROL CONFERENCE ACC 903
        16 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS MAN AND CYBERNETICS SMC 03 821
        17 INTERNATIONAL CONFERENCE ON ARTIFICIAL LIFE AND ROBOTICS ICAROB 817
        18 7TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS 750
        19 20TH IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION ICRA 730
        20 19TH IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION ICRA 722

        The leading conferences, at the end of which the largest number of documents are published, are conferences supported by the international non-profit association of specialists in the field of engineering, the Institute of Electrical and Electronics Engineers (IEEE). For convenience and clarity, such conferences have been highlighted in green.
        2.2. Research of published articles on the subject of "Robotics"
        Let's analyze the current state of publication of articles on the subject of "Robotics" based on data from the WoS abstract database from 1983 to 2018 (Figure 5). A total of 38 448 documents of this type were published during this period. Based on the data obtained, we can conclude that interest in the topic of "Robotics" grows from year to year, as confirmed by the steady increase in the number of published articles. Interestingly, conferences on the topic began to be held in 1989, and documents based on their results began to appear at the same time, while articles on the chosen topic began to be published six years earlier, in 1983. A jump of more than 800 published documents is also observed in 2015.

        Figure 5. The number of published articles on the subject of "Robotics" from 1983 to 2018
        3. Development of related areas
        The development of robotics is largely determined by the level of development of related industries: materials science as well as computer technology. Data on the interdisciplinary research conducted are presented in Table 2.
        Table 2. Number of publications in related industries
        № Web of Science category Number of publications
        1 Computer Science, Artificial Intelligence 75 015
        2 Automated control systems 69 038
        3 Electrical and Electronic Engineering 52 247
        4 Computer Science, Cybernetics 13 171
        5 Computer Science, Theory and Methods 11 224
        6 Mechanical engineering 10 327
        7 Computer science, information systems 9 348
        8 Computer science, interdisciplinary applications 7 985
        Based on the data in Table 2, it can be concluded that over the entire time the Robotics topic has existed in WoS, the largest numbers of documents fall under such related topics as Computer Science, Artificial Intelligence (more than 44%), Automated Control Systems (more than 41%) and Electrical and Electronic Engineering (more than 31%). From 2009 to 2018, three blocks of headings can be distinguished:
        1. Sustainable development.
        2. Forward-looking studies.
        3. Bursts.
        The sustainable development block is characterized by positive publication dynamics. The leading topics are: computer science, artificial intelligence (43 180 publications), automated control systems (31 781 publications), electrical and electronic engineering (29 514 publications).
        The forward-looking studies block comprises 20 areas that show positive publication dynamics in the period under consideration. They touch on the use of robotics in the social sphere: medicine and education.
        The burst block is represented by one-off increases in publications, so it is not yet possible to assess the development of such interdisciplinary studies. In addition, it was revealed that 43 thematic headings occur only once, which indicates isolated interdisciplinary studies in these areas.
        In addition to the leading topics (1 300 published documents over 10 years), some areas were selected that are inherently associated with robotics. From the diagram (Figure 6) it can be seen that the jump in the publication of documents precisely on the study of artificial intelligence occurred in 2015. Studies on automation of systems management show a stable number of documents - about 3 000 per year. With regard to electrical and electronic engineering, approximately 2 500 papers are published annually. Figure 6 also shows the distribution for 6 main headings out of 71 presented. It is in the six areas indicated below that the largest number of documents are published.

        Figure 6. Number of publications on related topics from 2009 to 2018

        Conclusions
        Worldwide, a steady, rapid increase in scientific activity became evident in the 2000s. The leading positions are occupied by the USA, Japan and China.
        The development of scientific thought occurs not progressively but, rather, cyclically. Time is needed for research and for developing the presented concepts into new results. Such a period can reach 5-7 years, after which a sharp jump in publication activity is visible, followed by a decline. More advanced technologies emerge, and scientific thought develops in accordance with the changing needs of the time. Past developments are supplanted and replaced by more efficient next-generation technologies. After about 5-7 years, the situation repeats.
        The thematic area "Robotics" is developing intensively, as evidenced by the active publication activity of the leading countries and of countries interested in developing and using robotics. Interest in this area is growing. Because robotics lies at the junction of industries, related areas such as artificial intelligence, automated control systems, electrical and electronic engineering, and materials science are actively developing in parallel. The further development of robotics will therefore largely be determined, among other things, by the development of these related areas.
        Acknowledgements
        The study was carried out at the expense of the Russian Science Foundation grant (project # 19-71-30008).

        Speaker: Mr Andrey Cherkasskiy (National Research Nuclear University MEPhI (Moscow Engineering Physics Institute))
      • 14:00
        Data analysis platform for stream and batch data processing on hybrid computing resources 15m

        The modern Big Data ecosystem provides tools to build a flexible platform for processing data streams and batch datasets. Both the operation of modern giant particle physics experiments and the services needed for the work of many individual physics researchers generate and transfer large quantities of semi-structured data. It is therefore promising to apply cutting-edge technologies to study these data flows and make service provisioning more effective.
        In this work, we describe the structure and implementation of our data analysis platform, built around an Apache Spark cluster. With official support for GPU computing now available in Spark version 3, we propose a change in architecture to utilize these more performant resources while keeping the platform's functionality provided by mainstream Big Data software. Furthermore, adding GPU support necessitated changing the computing resource management infrastructure from Apache Mesos to Kubernetes. Finally, to show the features and operation of the system, we used the task of network packet analysis for security monitoring and anomaly detection in both batch and stream modes.
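The batch/stream duality mentioned above rests on keeping the analysis itself a pure function, so the same logic can be fed either a full dataset or a running stream. A minimal plain-Python sketch of the idea (standing in for the Spark jobs; the packet-size rule is invented):

```python
# Sketch: one analysis function, two feeding modes. In Spark the batch
# path would be a DataFrame job and the stream path a Structured
# Streaming query, but the anomaly rule itself stays identical.

def flag_anomalous(packet):
    """Toy anomaly rule (invented): unusually large packets are suspicious."""
    return packet["size"] > 1500

def analyze_batch(packets):
    """Batch mode: process a complete dataset at once."""
    return [p for p in packets if flag_anomalous(p)]

def analyze_stream(packet_iter, sink):
    """Stream mode: consume packets one by one, emitting alerts to a sink."""
    for p in packet_iter:  # in Spark this would be a micro-batch loop
        if flag_anomalous(p):
            sink.append(p)

batch = [{"size": 64}, {"size": 9000}, {"size": 512}]
alerts_batch = analyze_batch(batch)

alerts_stream = []
analyze_stream(iter(batch), alerts_stream)
```

Both paths produce the same alerts, which is what makes sharing one codebase between batch and stream processing attractive.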

        Speaker: Ivan Kadochnikov (JINR, PRUE)
      • 14:15
        Intelligent Networks: using convolutional LSTM models to estimate network traffic 15m

        The Large Hadron Collider experiments at CERN produce a large amount of data which are analyzed by the High Energy Physics (HEP) community in hundreds of institutes around the world.
        Both efficient transport and distribution of data across HEP centres are, therefore, crucial.
        The HEP community has thus established high-performance interconnects for data transport---notably the Large Hadron Collider Optical Private Network (LHCOPN) [1] linking the sites with data curation responsibility and LHCONE [1], a global overlay network linking over 150 collaborating sites. Efficient data transport over these networks is managed by the File Transfer Service (FTS) [2] which manages transfers on behalf of the end users.
        Although these networks are well designed---and evolve to meet the changing long-term requirements---short term bottlenecks nevertheless occur. End-user needs would therefore be better met if these networks could be reconfigured dynamically to meet short-term demands, for example by load-sharing across multiple paths or through the temporary commissioning of an additional point-to-point link.
        This work is aimed at detecting link saturation in order to provide a solid basis for formulating sensible network re-configuration plans. We analyse data provided by FTS and use LSTM-based models (CNN-LSTM and Conv-LSTM) to effectively forecast network traffic along different network links.
        While convolutional layers are used to extract correlations across multiple features, LSTMs interpret network data as time sequences so that a combination of CNN and LSTM layers becomes the natural architectural choice.
        Our work shows that CNN-LSTM [3] and Conv-LSTM [4] architectures can indeed detect network saturation and provide good forecasting accuracy even over long time periods (up to 30 minutes). In addition, we provide a detailed performance comparison among different models, highlighting their strengths and flaws according to the specific task at hand. Our future research will focus on further optimising the DNN architecture, in particular the relative strength between the CNN and LSTM components in the hybrid models. In addition, a systematic investigation of their capability to generalise to different data sets will not only improve their usability for this specific application, but also contribute to DNN interpretability and the understanding of the learning process.
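Models of this kind are trained on fixed-length windows of link-traffic history. A framework-free sketch of that supervised windowing step (the traffic values are invented, not FTS data):

```python
# Sketch of turning a traffic time series into (history, target) pairs,
# the standard supervised setup consumed by CNN-LSTM forecasters.

def make_windows(series, n_in, n_out):
    """Split a series into input windows of n_in samples and
    target windows of the following n_out samples."""
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        y.append(series[i + n_in:i + n_in + n_out])
    return X, y

# Invented per-interval throughput samples for one link (e.g. Gb/s):
traffic = [10, 12, 15, 40, 42, 41, 12, 11]
X, y = make_windows(traffic, n_in=3, n_out=1)
```

Each `X[i]` would become one input sequence for the LSTM layers, with `y[i]` as the value to forecast; longer `n_out` horizons correspond to the up-to-30-minute forecasts discussed above.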

        [1] E. Martelli and S. Stancu 2015 J. Phys.: Conf. Ser. 664 052025
        [2] J. Waczyńska, E. Martelli, E. Karavakis, T. Cass, NOTED: a framework to optimize the network traffic via the analysis of data set from transfer services such as FTS, paper presented at vCHEP 2021 (2021)
        [3] X. Song, F. Yang, D. Wang, K. Tsui, Combined CNN-LSTM Network for State-of-Charge Estimation of Lithium-Ion Batteries, IEEE Access 7, 88894 (2019)
        [4] H. Zheng, F. Lin, X. Feng, Y. Chen, A hybrid deep learning model with attention-based conv-LSTM networks for short-term traffic flow prediction, IEEE Transactions on Intelligent Transportation Systems (2020)

        Speaker: Ms Joanna Waczynska (Wroclaw University of Science and Technology)
      • 14:30
        Benchmark of Generative Adversarial Networks for Fast HEP Calorimeter Simulations 15m

        Accurate simulations of elementary particles in High Energy Physics (HEP) detectors are fundamental to accurately reproduce and interpret the experimental results and to correctly reconstruct particle flows. Today, detector simulations typically rely on Monte Carlo-based methods, which are extremely demanding in terms of computing resources. The need for simulated data at future experiments - like the ones that will run at the High Luminosity LHC (HL-LHC) - is expected to increase by orders of magnitude. This expectation motivates research into alternative, deep-learning-based simulation strategies.
        In this research we speed up HEP detector simulations for the specific case of calorimeters using Generative Adversarial Networks with a huge factor of over $100\,000$x compared to the standard Monte Carlo simulations. This could only be achieved by designing smart convolutional 2D network architectures for generating 3D images representing the detector volume. Detailed physics evaluation shows an accuracy similar to the Monte Carlo simulation.
        Furthermore, we quantized the data format for the neural network architecture (usually INT32) with the novel Intel Low Precision Optimization tool (LPOT) to a reduced precision (INT8) data format. This resulted in an additional $1.8$x speed up on modern Intel hardware while maintaining the physics accuracy. These excellent results consolidate the beneficial use of GANs for future fast detector simulations.
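As a hedged illustration of the general technique behind the reduced-precision speed-up (this is a generic sketch, not the LPOT implementation), post-training symmetric quantization maps each float tensor to int8 values plus a single scale factor:

```python
# Sketch of post-training symmetric INT8 quantization: floats are mapped
# to the int8 range [-128, 127] via one per-tensor scale, and recovered
# (approximately) by multiplying back.

def quantize_int8(weights):
    """Return (int8 values, scale) for a list of floats."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction of the original floats."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.02]
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
```

The inference speed-up comes from executing the arithmetic on the int8 values; the quantization error (here the difference between `w` and `w_hat`) is what the physics validation must show to be negligible.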

        Speaker: Florian Rehm (CERN, RWTH Aachen University (DE))
      • 14:45
        Identification of news text corpora influencing the volatility of financial instruments 15m

        Identifying news that affects financial markets is an important step toward predicting financial markets, and a large number of articles are devoted to this topic. The main problem in analyzing news, however, lies in the neural networks used. These networks are built to analyze user reviews of a particular object, be it a restaurant, a movie or a purchased item; in such reviews the emotional component prevails, and this is what the networks are trained on. Applying these networks to classic news texts does not give a positive result: articles are classified as neutral, which is often not true.

        To solve the analysis problem, one needs to create a dedicated dataset on which to train the neural networks. Tens of thousands of news items are published daily, so handling them amounts to processing large volumes of distributed data. It is necessary to single out the pool of news items that led to increased volatility of financial instruments. This work describes methods for processing large volumes of distributed data in order to isolate such a pool of news, as well as the text corpora present in this news that affect the volatility of financial instruments.
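The selection step described above can be sketched as follows (plain Python with invented numbers; real volatility measures and timestamp handling would be more elaborate):

```python
# Sketch: keep only news items published shortly before a jump in the
# price of a financial instrument, as candidates for the training pool.

def volatility_spikes(prices, threshold):
    """Indices where the absolute one-step return exceeds `threshold`."""
    spikes = []
    for i in range(1, len(prices)):
        if abs(prices[i] - prices[i - 1]) / prices[i - 1] > threshold:
            spikes.append(i)
    return spikes

def news_before_spikes(news_times, spike_times, horizon):
    """News items published within `horizon` ticks before any spike."""
    return [t for t in news_times
            if any(0 < s - t <= horizon for s in spike_times)]

prices = [100, 101, 100, 110, 111]   # tick 3 is a ~10% jump (invented)
spikes = volatility_spikes(prices, threshold=0.05)
pool = news_before_spikes([1, 2, 4], spikes, horizon=2)
```

Items in the resulting pool would then be labelled as market-moving and used, together with neutral items, to train the sentiment networks on news-style text.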

        Speaker: alexey Stankus (-)
    • 13:30 15:00
      Computing for MegaScience Projects Conference Hall or Online - https://jinr.webex.com/jinr/j.php?MTID=m6e39cc13215939bea83661c4ae21c095


      • 13:30
        Visualization of Experimental Data in Web-based Virtual Reality 15m

        Technological advances in the field of virtual reality, and in personal computing in general, have brought us to the era of web-based virtual reality, where virtual environments can be accessed directly from web browsers without installing any additional software. Such online virtual environments are a promising tool for scientific data visualization. When accessed through appropriate hardware, such as VR headsets, they also offer full immersion and isolation from external influences.
        In this contribution, we present a prototype solution for histogram visualization in online virtual environments. The prototype has been implemented using the A-Frame framework for visualization, React.js for compositionality and JSROOT for histogram data acquisition. Its user interface is primarily adjusted to personal computers and VR headsets.

        Speakers: Štefan Korečko (DCI FEEI TU Košice, Slovakia), Martin Vala (JINR), Mr Martin Fekete (Department of Computers and Informatics, Faculty of Electrical Engineering and Informatics, Technical University of Košice)
      • 13:45
        Design and development of application software for MPD distributed computing infrastructure 15m

        The Multi-Purpose Detector (MPD) collaboration began using distributed computing for centralized Monte Carlo generation in mid-2019. DIRAC Interware is used as the platform for the integration of heterogeneous distributed computing resources. Since then, workflows for job submission, data transfer, and storage have been designed, tested, and successfully applied. Moreover, user interest in access to the computing system is growing. One way to provide such access would be to allow users to submit jobs directly to DIRAC. But direct access to the resources places high responsibility on the users and must be restricted. For this reason, another approach was chosen: to design and develop a dedicated application that collects requirements from a user and starts the required number of jobs. This approach requires additional effort: elaborating the requirements, designing the application, and developing it. But it allows greater control over the workload submitted by users, reducing possible failures and inefficient usage of resources.
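        The job-generation step of such an application might look as follows. This is purely an illustration: the real application talks to DIRAC rather than returning dictionaries, and all parameter names here are assumptions:

```python
def split_request(total_events, events_per_job, macro="simulate.C"):
    """Split a user's Monte Carlo request into job descriptions,
    one per chunk of at most events_per_job events."""
    jobs = []
    first = 0
    while first < total_events:
        n = min(events_per_job, total_events - first)
        jobs.append({"macro": macro, "first_event": first, "n_events": n})
        first += n
    return jobs

jobs = split_request(total_events=250_000, events_per_job=100_000)
print(len(jobs))  # 3 jobs: 100k + 100k + 50k events
```

        Centralizing this splitting logic is what lets the application enforce sane chunk sizes instead of trusting each user's hand-written submission.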

        Speaker: Igor Pelevanyuk (Joint Institute for Nuclear Research)
      • 14:00
        Web-based Event Display server for the MPD/NICA experiment 15m

        There are different methods for monitoring the engineering, network, and computer systems of high-energy physics experiments. As a rule, they share a common name, Event Display, and include a whole range of monitoring and control systems. During the experiment, the facility operator should receive comprehensive information about detector performance in an understandable and intuitive form in order to make the required changes in the data collection process. In this paper, we investigated the possibility of developing an Event Display based on a high-level programming language, JavaScript, built into any standard web browser. The web application was developed using NodeJS as a back-end platform, WebGL for 3D rendering and the modern framework React. The work was carried out within the frame of the MPD (Multi-Purpose Detector) detector under construction at the NICA (Nuclotron-based Ion Collider fAcility) collider at the Joint Institute for Nuclear Research (JINR, Dubna, Russia).

        Speaker: Alexander Krylov
      • 14:15
        Performance Analysis and Optimization of MPDroot 15m

        We present a performance analysis of MPDroot, the MPD data analysis and simulation software, carried out with profilers and benchmarks.
        Based on it, we draw preliminary conclusions and outline perspectives for future optimization.

        Speaker: Slavomir Hnatic (JINR)
      • 14:30
        Multithreaded event simulation mode in the BmnRoot package 15m

        Research at the NICA accelerator complex (JINR) requires efficient and fast software implementations of event simulation and reconstruction algorithms. The BmnRoot package, created for the BM@N experiment, is based on the ROOT environment, GEANT4 and the object-oriented FairRoot framework and includes tools for studying the characteristics of the BM@N detector, as well as for event reconstruction and data analysis. With the announcement of GEANT4MT multithreading support in ROOT and FairRoot, the task arose of adapting the BmnRoot code to a multithreaded mode. As a result, an event-level parallelism model for simulation was implemented. The stages of the work performed and the results of testing the multithreaded version are presented and discussed.

        This work was supported by the Russian Foundation for Basic Research grant 18-02-40104 mega.

        Speaker: Stepanova Margarita (SPbSU)
    • 13:30 - 15:00
      Distributed computing, HPC and ML for solving applied tasks 403 or Online - https://jinr.webex.com/jinr/j.php?MTID=mf93df38c8fbed9d0bbaae27765fc1b0f


      • 13:30
        Concept of peer-to-peer caching database for transaction history storage as an alternative to blockchain in digital economy 15m

        The development of the digital economy implies storing the history of a large number of transactions of every citizen involved in business processes based on digital technologies, starting from receiving public and social services in electronic form and ending with consumption of electronic goods and services produced by e-business and e-commerce.
        If we look carefully at the data structure within a digital economy system, we can see that transactions are grouped by a natural unique identifier of a citizen, which allows for efficient block distribution based on their hash across the node-segments of a scalable peer-to-peer NoSQL DBMS, eliminating the appearance of "hotspots"; the data itself can easily be represented as "key-value" tuples and stored in a columnar structure, providing fast search thanks to the gossip protocol, which redirects requests to the node whose responsibility range includes the hash of a specific unique identifier. Since we are talking about storing transaction history within the business processes of the digital economy, the key-value relationship is essentially a one-to-many relationship, in the context of a design focused on personalized information output for each specific user. Communication between users within groups and communities (a many-to-many relationship) is also possible and can be implemented by means of secondary indexes, materialized views or partial data redundancy through denormalization, depending on the cardinality of the data, to ensure acceptable performance of read queries.
        As for the task of quickly obtaining summary or aggregated statistical information, it is not difficult to solve by loading the necessary data into a YARN cluster of the open technology platform Apache Hadoop, for example into the in-memory processing engine Spark, applying the principle of Resilient Distributed Datasets and the basic concepts of building a pipeline of map, shuffle, sort and reduce operations in the framework of functional programming.
        However, the relative simplicity of horizontally scaling disk space, processing power and RAM does not provide transactional scaling, as simultaneous access of a large number of users to the central database nodes would make the bandwidth of the data network a bottleneck. Therefore, we need a peer-to-peer caching database that stores all data relevant to a particular user on their device and on the closest peer-to-peer servers, selected by proximity criteria over a given set of features and attributes.
        If we rise to an empirical level, from the perspective of participants in the digital economy, it is a question of storing a set of facts. Facts in a database are immutable; once stored, they do not change. However, old facts may be replaced by new facts over time or due to circumstances. The state of the database is the value determined by the set of facts in effect at a given point in time. So, this analysis allows us to move on to a more detailed consideration of the architecture of the proposed peer-to-peer caching database design solution.
        A peer-to-peer client library (a peer-to-peer access library) is embedded into the client application and allows it to get data from the peer-to-peer servers, to cache data on the client device (to reduce the load on the peer-to-peer servers) while keeping such an important property as "eventual immutability", and to exchange peer-to-peer server lists with other clients.
        The peer-to-peer server provides data access by caching the necessary segments of the central database demanded by the connecting clients. Connection to a specific group (farm) of peer-to-peer servers is determined by specified criteria, which can be geolocation data, type of users, type of processes, type of transactions, etc. Peer-to-peer servers can exchange data segments with each other (peer-to-peer communications), and store as many data segments as the storage system quotas and limitations allow. In certain cases, a client application may also act as a peer-to-peer server, but there are threats of loss of data integrity and validity through the emergence of fake peer-to-peer servers on the network, created by hackers to discredit it.
        Writes to the central database (and, if developers wish, in parallel to peer-to-peer servers) can be made by means of transactors, which accept write transactions and process them serially, ensuring guaranteed integrity until successful synchronization with the central database thanks to the replication factor of the distributed network file system (an odd number of servers, no fewer than three, is recommended to ensure a write quorum); open technology solutions based on Apache Hadoop HDFS or Apache Cassandra can be selected as the basis. However, HDFS fault tolerance will require additional components such as ZooKeeper, ZooKeeper Failover Controller and Quorum Journal Manager.
        Access to the transactor is recommended as part of a service-oriented architecture, through REST-services that can be scaled by applying standard load-balancing technologies used in web server deployments. This approach allows providing access to the transactor through the usual HTTP protocol, and transactors themselves and the centralized database will be in an isolated network, access to which should be done via routing with the use of modern encryption technologies, and hacker attacks via HTTP protocol can be prevented by modern IPS systems, combining signature and heuristic approaches of malicious activity detection.
        According to the principles of organising access to the transactor, access to the central data repository can be easily organised as well. The proposed approach makes it possible to implement staggered isolation of the central database and cascading of network traffic through the use of peer-to-peer server farms and service-oriented architecture.
        In conclusion, it would be useful to note that the proposed concept of a distributed horizontally scalable and cascadable peer-to-peer caching database could become the basis for a modern, efficient, as well as easy-to-implement and maintain technological platform for the implementation of digital economy services in the Russian Federation.
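        The hash-based assignment of a citizen's unique identifier to a responsible node-segment, as described above, can be illustrated with a toy consistent-hash ring. This is a purely hypothetical sketch, not part of the proposed design; node names and the single-token-per-node simplification are assumptions:

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Toy consistent-hash ring: each node owns the key range
    ending at its own token (hash of its name)."""
    def __init__(self, nodes):
        self.ring = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(key):
        # SHA-256 digest interpreted as a big integer token
        return int(hashlib.sha256(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        """Route a key to the first node whose token follows the key's hash,
        wrapping around the ring."""
        tokens = [t for t, _ in self.ring]
        i = bisect_right(tokens, self._hash(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("citizen-123456"))
```

        Because the mapping depends only on the key's hash, any peer can compute it locally, which is what makes gossip-style request redirection possible without a central router.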

        Speaker: Mikhail Belov (Dubna State University)
      • 13:45
        Direct computational experiment in storm hydrodynamics of marine objects 15m

        The paper presents and discusses a new computer toolkit for assessing the seaworthiness of a ship in stormy sailing conditions, intended for testing new design solutions for promising ocean-going ships and ships of unlimited ocean navigation, as well as organizing full-function simulators. The presented toolkit can be used by captains to select effective and safe sailing modes, as well as to train personnel.
        A modern computational experiment allows direct modeling of intense sea waves, which fully recreates the hydromechanics of external influences on the ship and ensures its full positioning relative to the crests of storm waves. Such experiments require high-performance computing technology based on GPGPU, as well as distributed data preparation and processing. The parameters of three structures of real sea waves, including ones of extreme height, are used as the initial conditions for modeling. The ship also possesses real dynamic characteristics and is capable of sailing on arbitrary courses relative to the waves. The possibility of dynamically changing the parameters of the ship during the experiment is provided. It is required to assess the motion state in various cases of ship loading, including emergency ones.
        The most important feature of the software is full-fledged three-dimensional visualization of all storm waves, as well as of the spatial position, trajectory and parameters of the ship. A special experimental environment for engineering surveys of projected ships is created using the graphical tools of OpenGL.
        This computer toolkit is considered within the framework of the concept of a virtual testbed.

        Speakers: Alexander Degtyarev (Professor), Ivan Gankevich (Saint Petersburg State University), Vasily Khramushin
      • 14:00
        Development of a tool for interactive detailing of areas of objects for the strength modeling system 15m

        Introduction

        Technological progress places ever higher demands on the strength properties of structural elements of buildings and structures, machines and mechanisms, and on reducing their material consumption. This leads to the need for effective use of existing methods of solid mechanics, the creation of new ones, and the training of new highly qualified specialists.
        One of the most important tasks in the field of deformable body mechanics is the representation of an existing CAD model in the form of a finite element mesh. Such a transformation enables the final calculations and optimization of the mathematical parameters of the object and allows solving the following problems:

        • Preservation of all geometric features of a complex object in its model.
        • Ability to make changes of any scale to every part of the model.
        • Support for complex operations with the object: local rescaling of big and small details, optimization of parameters.
        • Elimination of loss of information about the properties of objects.

        Our work aims to find the optimal way to solve these and other related problems.

        Related work

        Although various popular solutions already exist, both open-source (FreeCAD, Salome) and commercial (COMSOL, ANSYS [1, 2]), they nevertheless have their drawbacks. In particular, to make the user experience as easy as possible, integration with an intuitive visual interface is required. This approach provides interactivity, as well as speed of interaction and execution of operations. Its implementation will increase the speed and productivity of work and reduce the requirements on user skills.

        Problem definition

        Various existing methods of numerical modelling are based on testing the technology's operability on separate boundary value problems and do not allow the transition from the description of the algorithm to its immediate application by end-users. Our goal is to create an environment suitable for easily integrating custom calculation methods into the underlying system we have implemented. At the same time, an important task is to support local scaling.

        System description

        We believe that the optimal solution to the described problems is an implementation in the form of a computer-aided system for the design, modelling and processing of objects. To improve efficiency, the web service format was chosen [3, 4]. At this stage, the following prototype system was developed in Python:

        • The Jupyter framework, as well as the GMSH and VTK utilities, were chosen as the basis for the interface visualization;
        • To implement the automated design of 3D models, tools of the PythonOCC, VTKjs libraries were used;
        • The main functionality of numerical calculations was performed using the FEniCS library;
        • Containerization is done using Docker functionality.

        The proposed solution made it possible to introduce a step for interactive adjustment of the local mesh resolution between the step of modelling the object and its division into finite elements.
        Additionally, it should be noted that the system supports topology optimization of the object: it allows the user to automatically optimize the model for several user-selected characteristics and then compare it with the original one.
        The algorithm described in the presented work allows the creation of rather complex objects using only a mathematical description of the various parts. This gives us many opportunities to fine-tune the characteristics of the processed object. At the same time, the implementation of tools for managing the local mesh resolution will allow users to integrate their modern numerical methods, such as XFEM and GFEM, the use of which is not available in existing modelling packages.
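        The "local rescaling" idea can be illustrated, at a toy scale, by refining a 1D mesh only inside a user-selected interval until every segment there is shorter than a tolerance. The real system works on 3D GMSH meshes; the function and parameter names here are hypothetical:

```python
def refine_local(nodes, region, max_h):
    """Insert midpoints into every segment lying inside `region`
    (a (lo, hi) interval) until its length is at most max_h;
    segments outside the region are left untouched."""
    lo, hi = region
    out = sorted(nodes)
    changed = True
    while changed:
        changed = False
        refined = [out[0]]
        for a, b in zip(out, out[1:]):
            if lo <= a and b <= hi and b - a > max_h:
                refined.append((a + b) / 2)  # split this segment
                changed = True
            refined.append(b)
        out = refined
    return out

# Refine only the right half of the unit interval
mesh = refine_local([0.0, 0.25, 0.5, 0.75, 1.0], region=(0.5, 1.0), max_h=0.1)
print(mesh)
```

        The coarse segment [0.0, 0.5] is preserved, while [0.5, 1.0] is subdivided down to h = 0.0625, mirroring how a user-selected area of a model gets a locally denser finite element mesh.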

        References

        1. Iakushkin O., Sedova O., Grishkin V. Jupyter extension for creating CAD designs and their subsequent analysis by the finite element method. CEUR Workshop Proceedings, vol. 1787. RWTH Aachen University, 2016. P. 530-534.

        2. Iakushkin O., Sedova O. Creating CAD designs and performing their subsequent analysis using open source solutions in Python. AIP Conference Proceedings, vol. 1922, 2018. P. 140011.

        3. Iakushkin O., Kondratiuk A., Eremin A., Sedova O. Development of a Containerized System to Build Geometric Models and Perform Their Strength Analysis. ICAIT'2018: Proceedings of the 3rd International Conference on Applications in Information Technology. 2018. P. 146-149.

        4. Sedova O., Iakushkin O., Kondratiuk A. Creating a tool for stress computation with respect to surface defects. CEUR Workshop Proceedings, vol. 2507. RWTH Aachen University, 2019. P. 371-375.

        Speaker: Egor Budlov
      • 14:15
        Detection of fertile soils based on satellite imagery processing 15m

        The paper proposes a method for detecting fertile soils based on the processing of satellite images. As a result of its application, a map of the location of fertile and infertile soils for a given region of the earth's surface is formed and the corresponding areas are calculated. Currently, data from most satellites are in the public domain and, as a rule, take the form of multispectral images of the earth's surface. Access to these data is provided through various access hub services. The paper proposes a method for automatically obtaining the necessary data for a region of interest over specified periods of time.
        The method for detecting fertile soils is based on the fact that fertile soil includes areas covered with vegetation in the spring-summer period. Therefore, by measuring the spectral characteristics of these areas in the late autumn period, when there is no vegetation on them, it is possible to obtain objective parameters of fertile soils. For detection, a number of classifiers are built that recognize two classes, fertile soil and sand, which is especially important when monitoring areas prone to desertification. The feature vector used for classification is a set of indices similar to the well-known NDVI index. This set of indices is calculated for each pixel of the image from its values in different spectral channels. The classifiers are implemented using CUDA parallel computing technology on a GPU. Based on the results of the experimental study, the classifier with the best recognition quality is selected.
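        For illustration, the standard NDVI formula, NDVI = (NIR - RED) / (NIR + RED), and a toy per-pixel threshold rule might look as follows. The actual work trains classifiers on a whole set of such indices and runs on CUDA; the threshold value here is an assumption:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel,
    from its near-infrared and red channel values."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def classify_pixel(nir, red, threshold=0.2):
    """Toy two-class rule: fertile soil vs. sand (threshold is an assumption)."""
    return "fertile" if ndvi(nir, red) > threshold else "sand"

print(classify_pixel(nir=0.6, red=0.2))   # NDVI = 0.5  -> fertile
print(classify_pixel(nir=0.3, red=0.28))  # NDVI ~ 0.03 -> sand
```

        Since the rule is independent per pixel, it maps directly onto a one-thread-per-pixel GPU kernel.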

        Speaker: Dr Valery Grishkin (Saint Petersburg State University)
      • 14:30
        Potential of Neural Networks for Air Quality Sensor Data Processing and Analysis 15m

        Air quality sensors represent an emerging technology for air quality monitoring. Their main advantage is that they are significantly cheaper than standard monitoring equipment. Low-cost, mass-produced sensors have the potential to form much denser monitoring networks and provide more detailed information on air pollution distribution. The drawback of sensor-based air pollution monitoring lies in the lower quality of measurements compared to standard monitoring equipment. It is known that the quality of air pollution sensor measurements is negatively influenced by meteorological factors, such as temperature or humidity. Neural networks are a potentially valuable technique for processing monitoring data: they can transform sensor measurements, complemented with meteorological data, into more accurate estimations of pollutant concentrations. A second possible use of neural networks with sensor data is their application as a prediction and analysis tool.

        Speaker: Jan Bitta (VSB-TU Ostrava)
      • 14:45
        Air Pollution Modelling Using Spatial Analysis and Neural Networks 15m

        In a huge number of applications, air pollution dispersion modelling using standard Gaussian methodologies is an excessively data-intensive process that requires considerable computing power. Land Use Regression (LUR) represents an alternative modelling methodology. LUR presumes that pollution concentration is determined by factors obtained via spatial analysis. These factors are chosen on the basis of their ability to describe air pollution variability. Most LUR models take into account factors describing pollution sources and land cover. The main disadvantage of the LUR model is its lower accuracy in comparison with Gaussian air pollution models. In the presented study, datasets of factors were created from emission data and from Gaussian model results.

        Standard LUR models use linear regression for the estimation of concentrations. The coefficient of determination (R$^2$) of the standard LUR models reached 0.639 for emission data and 0.652 for Gaussian model results data. We assumed that linear regression did not sufficiently reflect generally non-linear phenomena. Therefore, linear regression in the LUR model was substituted by Artificial Neural Network (ANN)-based regression, which is able to capture non-linear behavior. The R$^2$ of the improved LUR models achieved 0.937 for the LUR model based on emission data and 0.938 for the model based on Gaussian model results. ANN-based non-linear regression LUR models provide a more accurate characterization of air pollution distribution than standard models.
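        The R$^2$ values quoted above are the standard coefficient of determination, R$^2$ = 1 - SS_res / SS_tot. A minimal sketch on synthetic data (the numbers below are illustrative, not the study's data):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
print(round(r_squared(y_true, y_pred), 3))  # 0.98
```

        The same score is used to compare the linear and the ANN-based LUR regressions, which is what makes the 0.65 vs. 0.94 contrast directly interpretable.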

        Speaker: Dr Vladislav Svozilík (JINR LIT)
    • 13:30 - 15:00
      Quantum information processing 310 or Online - https://jinr.webex.com/jinr/j.php?MTID=m326d389213a5963a1114b8cbf9613612


      • 13:30
        Quantum control algorithm of imperfect knowledge bases of intelligent cognitive controllers 15m

        A quantum self-organization algorithm model for designing wise knowledge bases (KB) for intelligent fuzzy controllers with the required robustness level is considered. The background of the model is a new model of quantum inference based on a quantum genetic algorithm. The quantum genetic algorithm is applied online to search for the type of quantum correlation between unknown solutions in the quantum superposition of imperfect knowledge bases of intelligent controllers designed with soft computing. Disturbance conditions of the analytical information-thermodynamic trade-off interrelations between the main control quality measures (as new design laws) are discussed. Smart control design with guaranteed achievement of these trade-off interrelations is the main goal of the quantum self-organization algorithm for an imperfect KB. A sophisticated synergetic quantum information effect is introduced: a new robust smart controller is designed online from the responses of any imperfect KB to unpredicted control situations, applying quantum hidden information extracted from quantum correlation. Within the toolkit of classical intelligent control, achieving a similar synergetic information effect is impossible. Benchmarks of intelligent cognitive robotic control applications are considered.
        Keywords: quantum genetic algorithm, intelligent control, cognitive robotics

        Speaker: Prof. Sergey Ulyanov (professor)
      • 13:45
        Quantum Machine Learning for HEP detectors simulations 15m

        Quantum machine learning is one of the most promising applications on near-term quantum devices, which possess the potential to solve problems faster than traditional computers. Classical machine learning is taking on a significant role in particle physics to speed up detector simulations. Generative Adversarial Networks (GANs) have been shown to achieve a level of accuracy similar to that of the usual simulations while decreasing the computation time by orders of magnitude.
        In this research we go one step further and apply quantum computing to GAN-based detector simulations.
        Given the practical limitations of current quantum hardware in terms of the number of qubits, connectivity and coherence time, we performed initial tests with a simplified GAN model running on quantum simulators. The model is a classical-quantum hybrid ansatz. It consists of a quantum generator, defined as a parameterised circuit based on single- and two-qubit gates, and a classical discriminator network.
        Our initial qGAN prototype focuses on a one-dimensional toy distribution representing the energy deposited in a detector by a single particle. It uses three qubits and achieves high physics accuracy thanks to hyper-parameter optimisation. A second qGAN was developed to simulate 2D images with a 64-pixel resolution, representing the energy patterns in the detector. Different quantum ansatzes were studied. We obtained the best results using a tree tensor network architecture with six qubits.
        Additionally, we discuss challenges and potential benefits of quantum computing as well as our plans for future development.
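        The quantum-generator idea can be illustrated at a toy scale, without any quantum SDK, by the smallest possible parameterised circuit: a single-qubit RY rotation whose measurement probabilities form a two-bin "energy" distribution. This is a drastic simplification of the three- and six-qubit circuits used in the work, for intuition only:

```python
import math

def ry_generator(theta):
    """One-qubit 'generator': RY(theta)|0> has amplitudes
    (cos(theta/2), sin(theta/2)); the measurement probabilities
    form a two-bin distribution controlled by the parameter theta."""
    a0 = math.cos(theta / 2)
    a1 = math.sin(theta / 2)
    return [a0 ** 2, a1 ** 2]

probs = ry_generator(math.pi / 3)
print([round(p, 3) for p in probs])  # [0.75, 0.25]
```

        In a qGAN, a classical discriminator's feedback is used to update parameters like theta so that the circuit's output distribution matches the target (here, a detector energy spectrum).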

        Speaker: Florian Rehm (CERN, RWTH Aachen University (DE))
      • 14:00
        Effective algorithm of calculating the Wigner function for a quantum system with a polynomial potential 15m

        When considering quantum systems in phase space, the Wigner function is used as a quasidensity of probabilities. Finding the Wigner function involves calculating the Fourier transform of a certain composition of wave functions of the corresponding quantum system. As a rule, knowledge of the Wigner function is not the ultimate goal, and calculations of the mean values of various quantum characteristics of the system are required.
        The explicit solution of the Schrödinger equation can be obtained only for a narrow class of potentials, so in most cases it is necessary to use numerical methods for finding the wave functions. As a result, finding the Wigner function requires the numerical integration of grid wave functions. When considering a one-dimensional system, the calculation of N$^2$ Fourier integrals of the grid wave function is required. To provide the necessary accuracy for the wave functions corresponding to the higher states of the quantum system, a larger number of grid nodes is needed.
        The purpose of this work was to construct a numerical-analytical method for finding the Wigner function that allows one to significantly reduce the number of computational operations. Quantum systems with polynomial potentials, for which the Wigner function is represented as a series in certain functions, were considered.
        The results described were obtained within a unified consideration of classical and quantum systems in the generalized phase space on the basis of the infinite self-interlocking chain of Vlasov equations. It is essential that, using the apparatus of quantum mechanics in phase space, one can estimate the required parameters of quantum systems, and the proposed numerical methods make it possible to perform such calculations efficiently. The availability of exact solutions to model nonlinear systems plays a cardinal role in designing complex physical facilities such as the SPD detector of the NICA project. Such solutions are used as tests when writing program code and can also be encapsulated in finite difference schemes within the numerical solution of boundary value problems for nonlinear differential equations. The proposed efficient numerical algorithm can be applied to solve the Schrödinger equation and the magnetostatics problem in a region with a non-smooth boundary.
        The work was supported by the RFBR grant No. 18-29-10014.
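        For reference, in the standard convention the Wigner function mentioned above is obtained from the wave function as

```latex
W(q,p) = \frac{1}{\pi\hbar}\int_{-\infty}^{\infty}
  \psi^{*}(q+x)\,\psi(q-x)\,e^{2ipx/\hbar}\,dx
```

        which is exactly the "Fourier transform of a composition of wave functions" whose discretization over an N-point grid leads to the N$^2$ integrals discussed in the abstract.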

        Speaker: Evgeny Perepelkin (JINR)
      • 14:15
        Classical Fisher information for the state space of N-level systems through the Wigner function 15m

        Studies of the geometrical aspects of quantum information are becoming highly relevant owing to practical purposes.
        Driven by demands coming from quantum technology, the formulation of quantum estimation theory has moved to the frontier of modern research. In particular, the interrelations between phase space quasidistributions and the classical Fisher metric are of current interest.
        Our studies are devoted to this issue, and in the report we present a representation of the classical Fisher metric corresponding to a quantum system in states admitting a description in terms of a positive definite Wigner function.

        Speaker: Vahagn Abgaryan (JINR LTP)
      • 14:30
        Describing quantumness of qubits and qutrits by Wigner function’s negativity 15m

        According to modern views, the Wigner quasiprobability distribution provides qualitative information on many quantum phenomena occurring in diverse physical systems. The Wigner function has all the properties of statistical distributions except one: taking negative values for some quantum states, the Wigner function turns out not to be a proper distribution, and hence it indicates the existence of truly quantum features which cannot be described within the classical statistical paradigm. Deviation of the Wigner quasiprobability distribution from a proper statistical distribution of a physical system is interpreted as evidence of non-classicality, or quantumness. In this report, based on the recently elaborated method of constructing the Wigner function of a finite-dimensional system, we discuss the following measures/indicators for quantifying the non-classicality of a finite-dimensional system: 1. The negativity probability, defined for an arbitrary ensemble of random quantum states as the ratio of the number of states with negative Wigner functions to the total number of generated states. 2. The KZ indicator, introduced by A. Kenfack and K. Zyczkowski and defined as an integral over the phase-space manifold of the absolute value of the Wigner function. 3. The global indicator of non-classicality, defined as the ratio of the volume of the orbit space of states with non-negative Wigner function to the volume of the total orbit space. It is assumed that the volume is calculated with respect to a Riemannian metric induced by the mapping of the state space to the orbit space. All the above-mentioned non-classicality measures will be exemplified by considering the Hilbert-Schmidt ensemble of qubits and qutrits.
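        For reference, the Kenfack-Zyczkowski indicator (item 2 above) is conventionally written as the excess of the integrated absolute value of the Wigner function over its normalization:

```latex
\delta(\rho) \;=\; \int_{\Omega} \bigl|\,W_{\rho}(\Omega)\,\bigr|\, d\Omega \;-\; 1
```

        so that $\delta = 0$ for states with a non-negative Wigner function and $\delta > 0$ whenever negative regions appear.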

        Speaker: Astghik Torosyan (LIT)
      • 14:45
        On the geometry of non-maximal strata qudit space with Bures metric 15m

        Modern applications of quantum mechanics have renewed interest in the properties of the set of density matrices of finite size. The issue of establishing Riemannian structures on the quantum counterparts of spaces of probability measures has become a subject of recent investigations.
        We study quantum analogues of a well-known, natural Riemannian metric, the so-called Fisher metric. Explicit formulae for the Bures metric are known for special cases: e.g. J. Dittmann has derived several explicit formulae on the manifold of finite-dimensional non-singular density matrices. However, owing to the nontrivial differential geometry of the state space, studies of its Riemannian structures require a refined analysis for non-maximal rank density matrices. We calculate the metric and discuss several geometric properties of the qudit state space.

        Speaker: Martin Bures (IEAP, CTU Prague, Czechia & JINR Dubna, Russia)
    • 15:00 15:20
      Coffee 20m
    • 15:20 16:35
      Big data Analytics and Machine learning. 407 or Online - https://jinr.webex.com/jinr/j.php?MTID=m573f9b30a298aa1fc397fb1a64a0fb4b

      407 or Online - https://jinr.webex.com/jinr/j.php?MTID=m573f9b30a298aa1fc397fb1a64a0fb4b

      • 15:20
        Classification of lung X-rays with pneumonia disease using deep learning models 15m

        Pneumonia is a lung disease caused by either a bacterial or viral infection. It can be life-threatening if not treated in time, so early diagnosis is vital. The aim of this work is the automatic detection of bacterial and viral pneumonia from X-ray images. Four different pre-trained deep convolutional neural networks (CNNs): VGG16, ResNet50, DenseNet201, and MobileNet_v2 were used to classify the X-ray images. A dataset of 3,948 chest X-rays comprising bacterial, viral, and normal cases was used; after preprocessing, the modified images were used to train the networks for the classification task. The proposed study may thus be useful for faster diagnosis of pneumonia by a radiologist and may help in the rapid screening of patients with pneumonia.

        Speaker: Prof. Eugene Shchetinin (Financial University under the Government of the Russian Federation)
      • 15:35
        Forecasting and assessment of land conditions using neural networks 15m

        This paper proposes a method for predicting and assessing land conditions based on satellite image processing using neural networks. In some regions whose economies rely mainly on agriculture and cattle breeding, the threat of irreversible soil changes has appeared, in particular desertification, which can lead to serious environmental and economic problems. It is therefore necessary both to identify the current state of the land and to predict its state in the near future in order to assess the need for preventive work and its scale.

        The essence of the proposed method is the application of neural networks for both prediction and segmentation of land images by type. This process can be divided into two stages: 

        1. Predicting what the situation will be in a few years;
        2. Segmentation of the original and predicted images to determine quantitative and qualitative changes. 

        In other words, first the possible state of the lands of interest in a certain time interval is predicted from the available image, and then the changes are evaluated. We propose to use a neural network of Encoder-Decoder type for prediction and a network of U-Net type for segmentation.
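        The two-stage pipeline described above can be sketched as follows; `predict_future` and `segment` are hypothetical placeholders for the Encoder-Decoder and U-Net networks, operating here on toy grids of class labels, with a made-up "vegetation turns into bare soil" dynamic standing in for the real forecast:

```python
def segment(image):
    # placeholder for the U-Net: here images already come as grids of
    # land-cover class ids (e.g. 1 = vegetation, 2 = bare soil, 3 = water),
    # so "segmentation" is the identity
    return image

def predict_future(image):
    # placeholder for the Encoder-Decoder forecast (+3 years); as a toy
    # stand-in, every "vegetation" pixel (1) turns into "bare soil" (2)
    return [[2 if cls == 1 else cls for cls in row] for row in image]

def class_areas(mask):
    # pixel count per land-cover class
    areas = {}
    for row in mask:
        for cls in row:
            areas[cls] = areas.get(cls, 0) + 1
    return areas

def assess_change(image):
    # stage 1: forecast, stage 2: segment both images and compare areas
    now = class_areas(segment(image))
    future = class_areas(segment(predict_future(image)))
    classes = set(now) | set(future)
    # positive value: the class gains area in the forecast
    return {c: future.get(c, 0) - now.get(c, 0) for c in classes}
```

        In the real system both placeholder functions would be trained networks; the comparison step is the same per-class area bookkeeping.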

        Since there are no ready-made training datasets for land state estimation, we generated them ourselves. Images from the open Sentinel-2 satellite database were chosen as the data source. This source was chosen because the results of a primary segmentation of the satellite images for the specified regions are also stored in the same database. The images are pre-segmented into 11 basic classes: vegetation, bare soil, water, etc. The second (segmentation) network is trained precisely to determine these 11 classes.

        Satellite images of an area where desertification processes are particularly prominent were downloaded for this work. The data refer mainly to the summer periods of 2017-2020. Thus, the predictive network can make a prediction 3 years ahead, but the procedure can be repeated to find out what the situation could be in 6 years.

        The paper presents a method that allows one not only to analyze the past state of the areas of interest, but also to assess a situation that has not yet occurred.

        Speakers: Ms Anastasiia Khokhriakova, Mr Valery Grishkin
      • 15:50
        Using distributed computing systems to solve the problem of image classification using deep neural networks 15m

        Machine learning methods and, in particular, deep neural networks are often used to solve the problem of image classification. There is a tendency to increase both the amount of training data and the size of neural networks. The process of training a deep neural network with millions of parameters can take hundreds of hours on modern computing nodes. Parallel and distributed computing can be used to reduce the training time. The extensive scaling capabilities of grid systems and the ease of connecting new computing nodes can significantly reduce the training time of deep neural networks. At the same time, however, the specifics of data exchange between the nodes of the grid system must be taken into account. Using the example of classifying a large set of images, we propose methods for organizing distributed deep learning.

        Speaker: Ilya Kurochkin (IITP RAS)
      • 16:05
        High resolution image processing and land cover classification for hydro-geomorphological high-risk area monitoring 15m

        High-resolution image processing for land-surface monitoring is fundamental to analysing the impact of different geomorphological processes on the Earth's surface under different climate change scenarios. In this context, photogrammetry is one of the most reliable techniques for generating high-resolution topographic data, being key to territorial mapping and change detection analysis of landforms in hydro-geomorphological high-risk areas.
        An important issue arises as soon as the main goal is to conduct analyses over extended areas of the Earth's surface (such as fluvial systems) in a short time, since the need to capture large datasets to develop detailed topographic models may limit the photogrammetric process, owing to the high demand for high-performance hardware.
        In order to investigate the best setup of computing resources for these rather peculiar tasks, a study of the performance of a photogrammetric workflow based on a FOSS (Free Open-Source Software) SfM (Structure from Motion) algorithm with different cluster configurations was performed, leveraging the computing power of the ReCaS-Bari data center infrastructure, which hosts several services such as HTC, HPC, IaaS and PaaS.
        The selected research areas are located along the hilly plain of the Basento river near Ferrandina (MT), in the Basilicata region of southeastern Italy. The aerial images were acquired as sequences of shots collected by low altitude ($\sim{50}$ m above ground level of the take-off location) UAV flight missions. Two datasets made of 1139 and 2190 images respectively were used for our investigation. Each image has a very high resolution ($\sim1.09$ cm/pixel, $\sim10$ MB) resulting in quite demanding computing tasks to generate the orthophotomosaic, the dense point cloud and DEM (Digital Elevation Model) of the detected area in the shortest lapse of time.
        In the case of this study, the resulting output is key to recognize the flooding hazard (through the monitoring of the river conditions, the identification of the channel alterations and morphological changes) and to timely plan the management activities of the emergency after a catastrophic event, with significant time and cost savings.
        The high performance computing automated photogrammetric workflow fits the scope of direct intervention to safeguard the environment and people's safety, assessing the future scenarios of environmental damage as a function of sudden climate changes.
        In our study the photogrammetric workflow was deployed on an HTC cluster composed of 128 servers, for a total of about 8000 CPU cores with 4 GB of RAM per core and 4 PB of parallel disk space. Each computing server, containing up to 64 slots, can access all the ReCaS-Bari disk space at a speed of 10 Gbps. The GPFS distributed file system is used for storage management. The operating system is CentOS 7, and the queues are managed by the HTCondor batch system. A parallel study was also run on the new ReCaS-Bari GPU cluster, exploiting a single server that hosts 4 GPUs, 96 CPU cores, 750 GB of RAM and 5.9 TB of SSD storage.
        Leveraging the high computing resources available on these clusters and a specific setup of the workflow steps, a reduction of several hours in processing time was recorded, especially compared to classic photogrammetric programs run on a single workstation with commercial software.
        The high quality of the image details can be used for land cover classification and preliminary change detection studies using Machine Learning techniques. A subset of the whole image dataset has been considered to test the performance of several Convolutional Neural Networks using progressively more complex layer sequences, data augmentation and callback functions for training the models. All the results are given in terms of model accuracy and loss and performance evaluations.

        Speaker: Giorgia Miniello (UNIVERSITA' DEGLI STUDI DI BARI E INFN BARI)
      • 16:20
        SQL query execution optimization on Spark SQL 15m

        The Spark – Hadoop ecosystem includes a wide variety of components and can be integrated with any tool required for Big Data today. From release to release, the developers of these frameworks optimize the internals of the components and make their usage more flexible and elaborate.
        Nevertheless, ever since the invention of MapReduce as a programming model and the first Hadoop releases, data skew has been and remains the main problem of distributed data processing. Data skew leads to performance degradation, i.e., a general slowdown of application execution and idling of resources. The newest versions of the Spark framework can handle this situation out of the box. However, in huge projects whose development started years ago, there is often no opportunity to upgrade tool versions and the corresponding logic.
        In this article, we consider approaches to optimizing SQL query execution in the presence of data skew, on a concrete example using HDFS and Spark SQL version 2.3.2.
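        One standard mitigation for join skew, key salting, can be illustrated in plain Python (this is a sketch of the idea, not Spark SQL API code; the tables and the salt count are made up): rows sharing a hot key are spread over several salted sub-keys, and the small side of the join is replicated across all salts so the join result is unchanged.

```python
import random

def salt_key(key, n_salts, rng):
    # rows sharing a hot key are spread over n_salts sub-keys
    return (key, rng.randrange(n_salts))

def replicate_small_side(rows, n_salts):
    # the small table is replicated so every salted copy finds its match
    return [((key, s), value) for key, value in rows for s in range(n_salts)]

rng = random.Random(42)
n_salts = 4

# Skewed "fact" table: one hot key dominates; tiny "dimension" table.
facts = [("hot", i) for i in range(8)] + [("cold", 99)]
dims = [("hot", "H"), ("cold", "C")]

salted_facts = [(salt_key(k, n_salts, rng), v) for k, v in facts]
salted_dims = dict(replicate_small_side(dims, n_salts))

# Join on the salted key: results match the unsalted join, but the hot
# key's rows now hash to up to n_salts different partitions.
joined = [(key, v, salted_dims[(key, s)]) for (key, s), v in salted_facts]
```

        In a distributed engine the salted keys hash to different partitions, so the hot key's work is spread over several executors at the cost of replicating the small table n_salts times.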

        Speaker: Gleb Mozhaiskii
    • 15:20 17:00
      Computing for MegaScience Projects Conference Hall or Online - https://jinr.webex.com/jinr/j.php?MTID=m6e39cc13215939bea83661c4ae21c095

      Conference Hall or Online - https://jinr.webex.com/jinr/j.php?MTID=m6e39cc13215939bea83661c4ae21c095

      • 15:20
        Missing Mass Method for reconstruction of short-lived particles in the CBM and STAR experiments 15m

        The search for short-lived particles is an important part of the physics research in experiments with relativistic heavy ions.
        Such investigations mainly study decays of neutral particles into charged daughter particles, which can be registered directly in the detector system. In order to find, select and study the properties of such short-lived particles in real time in the CBM experiment (FAIR/GSI, Germany), we have developed the KF Particle Finder package of algorithms, which searches for more than 150 decay channels.
        Of great physics interest are also the decays of short-lived charged particles in which one of the daughter particles is neutral and cannot be registered in the detector system. To find and study such decays, we have extended the KF Particle Finder package by implementing the missing mass method, which is based on the laws of conservation of energy and momentum.
        The method was studied in detail on simulated data of the CBM experiment, showing high efficiency with a large signal-to-background ratio, as well as high significance.
        As part of the FAIR Phase-0 program, the KF Particle Finder package of algorithms has been adapted for online and offline processing in the STAR experiment (BNL, USA).
        Based on the STAR HLT computer farm, we have created an express data production chain that extends the high-level trigger (HLT) functionality in real time all the way to physics analysis.
        An important advantage of express analysis is that it allows us to start calibrating, producing, and analyzing data as soon as it is collected. Therefore, the use of express analysis is extremely useful for data production in the BES-II physics program and will help accelerate scientific discovery by helping to produce results within a year of data collection completion.
        Here we describe and discuss in detail the missing mass method for finding and analyzing short-lived particles. Features of the application of the method to both simulated data in the CBM experiment and in the STAR experiment as part of real-time express data processing are given, as well as the results of real-time reconstruction of short-lived particle decays in the BES-II environment of the STAR experiment.
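        A minimal sketch of the kinematics behind the missing mass method (not the KF Particle Finder implementation): with the mother and the charged daughter four-momenta measured, energy-momentum conservation yields the invariant mass of the unseen neutral daughter. The decay channel and particle masses below are illustrative numbers chosen so the example is self-consistent.

```python
import math

def missing_mass(mother, charged):
    # four-vectors as (E, px, py, pz); energy-momentum conservation
    # gives the invariant mass of the unobserved neutral daughter
    E = mother[0] - charged[0]
    px = mother[1] - charged[1]
    py = mother[2] - charged[2]
    pz = mother[3] - charged[3]
    m2 = E * E - px * px - py * py - pz * pz
    return math.sqrt(m2) if m2 > 0 else float("nan")

# Toy example (GeV): Sigma- -> n + pi- in the Sigma rest frame
M, m_n, m_pi = 1.19745, 0.93957, 0.13957
E_pi = (M * M + m_pi * m_pi - m_n * m_n) / (2.0 * M)  # two-body kinematics
p = math.sqrt(E_pi * E_pi - m_pi * m_pi)              # daughter momentum
sigma = (M, 0.0, 0.0, 0.0)
pion = (E_pi, 0.0, 0.0, p)

m_miss = missing_mass(sigma, pion)   # recovers the neutron mass
```

        In the real analysis the mother track is measured before the decay and the charged daughter after it, so the same subtraction runs over reconstructed track parameters rather than exact four-vectors.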

        Speaker: Pavel Kisel (Frankfurt Uni, JINR)
      • 15:35
        Neural Networks in Modeling Beam Dynamics using Taylor Mapping 15m

        The paper describes a method for modeling beam dynamics based on solving ordinary differential equations with Taylor mapping. This method allows one to obtain the solutions of the system in both symbolic and numerical form. Using numerical simulation methods, one can obtain particular solutions of the beam dynamics process. The paper considers the possibility of solving the inverse problem: finding a general solution from the obtained particular data using machine learning methods. The solution of this problem will allow the beam dynamics to be predicted more accurately and help to manage the control system settings.
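        To give a concrete (hypothetical) picture of Taylor mapping, the sketch below propagates one transverse phase-space plane through a truncated order-2 one-turn map; the phase advance and the aberration coefficient are illustrative, not taken from any real lattice.

```python
import math

def apply_map(coeffs, x, px):
    # coeffs maps monomial exponents (i, j) to a coefficient, so the
    # mapped value is the sum of c_ij * x**i * px**j (a truncated Taylor map)
    return sum(c * x**i * px**j for (i, j), c in coeffs.items())

# Order-2 map for one transverse plane: a linear rotation (phase
# advance mu) plus a small illustrative quadratic aberration.
mu = 0.3
x_map = {(1, 0): math.cos(mu), (0, 1): math.sin(mu), (2, 0): 1e-3}
px_map = {(1, 0): -math.sin(mu), (0, 1): math.cos(mu)}

def track(x, px, turns):
    # apply the one-turn map repeatedly
    for _ in range(turns):
        x, px = apply_map(x_map, x, px), apply_map(px_map, x, px)
    return x, px

x1, px1 = track(1e-3, 0.0, 1)       # one turn
xn, pxn = track(1e-3, 0.0, 1000)    # many turns: motion stays bounded
```

        The "particular solutions" mentioned in the abstract correspond to such tracked trajectories; the inverse problem is to recover the map coefficients from them.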

        Speaker: Nataliia Kulabukhova (Saint Petersburg State University)
      • 15:50
        Grammar parser-based solution for the description of the computational graph within the GNA framework 15m

        The data flow paradigm has established itself as a powerful approach to machine learning. It is also very powerful for computational physics, although it is not used as much in that field. One of the complications is that physical models are much less homogeneous than ML models, which makes their description quite a complicated task.
        In this talk we present a syntax analyzer for the GNA framework (developed at DLNP). The framework is designed to build mathematical models as directed acyclic graphs. The syntax analyzer introduces a way for a concise description and configuration of the models using math-like syntax, providing scalability and branching even in non-homogeneous cases.
        The goal of the project is to develop a technique and a software to facilitate a generic analysis and input data description compatible with multiple backends (e.g. GNA).

        Speaker: Nikita Tsegelnik (BLTP, JINR)
      • 16:05
        Improvements of the LOOT model for primary vertex finding based on the analysis of development results 15m

        The recognition of particle trajectories (tracks) from experimental measurements plays a key role in the reconstruction of events in experimental high-energy physics. Knowledge of the primary vertex of an event can significantly improve the quality of track reconstruction. To solve the problem of primary vertex finding in the BESIII inner tracking detector, we applied the LOOT program, a deep convolutional neural network that processes all event hits at once, like a three-dimensional image. We used the mean absolute error to measure the quality of the trained model, but a thorough analysis of the results showed that this metric by itself is inadequate without considering the output distributions of the vertex coordinates. Correcting all errors allowed us to propose special corrections to the loss function that gave quite acceptable results. The course of our investigation and its outcomes are presented.

        Speaker: Ekaterina Rezvaya
      • 16:20
        TrackNETv3 with optimized inference for BM@N tracking 15m

        There are local and global approaches to track reconstruction, depending on the amount of input data available for training a neural network model that solves the reconstruction problem. Global methods need access to all tracks in an event, which results in a high memory footprint. We have successfully applied the recurrent neural network (RNN) TrackNETv2 and its updated version v2.1 to the problem of track reconstruction for the Monte-Carlo simulations of the BM@N RUN6 and BESIII experiments. We found that training for BM@N using buckets of tracks is unnatural for RNNs, so we made a few improvements to the training procedure, inspired by widely-known language modeling systems. We also propose a few vectorization tricks to speed up the inference phase of the model. The ghost filtration step was also modified to utilize information about the rough location of the event's primary vertex. The new TrackNETv3 program and preliminary results of its testing on Monte-Carlo simulations of BM@N RUN7 are presented.

        Speaker: Anastasiia Nikolskaia
    • 15:20 18:20
      Distributed computing, HPC and ML for solving applied tasks 403 or Online - https://jinr.webex.com/jinr/j.php?MTID=mf93df38c8fbed9d0bbaae27765fc1b0f

      403 or Online - https://jinr.webex.com/jinr/j.php?MTID=mf93df38c8fbed9d0bbaae27765fc1b0f

      • 15:20
        Efficient gossip-based protocol in the Neo blockchain network 15m

        Epidemic algorithms are widely explored for distributed systems based on trustful environments. However, the assumption of arbitrary peer behaviour in the Byzantine fault tolerance problem calls into question the appropriateness of well-studied gossip algorithms, since some of them rely on aggregated network information, e.g. the number of nodes in the network. Given this problem, designing an effective, scalable and reliable gossip-based network protocol for blockchain systems remains a tricky task. In this study, we analyze the performance of the gossip-based network protocol in the Neo blockchain and its impact on reliability and consistency, and we propose, implement and evaluate protocol improvements to reduce this impact. The enhanced protocol implementation is tested on a 100-node Neo cluster and achieves a significant reduction in network traffic consumption and an improved message delivery probability on the experimental network.

        Speaker: Anna Shaleva
      • 15:35
        Comparative analysis and applicability determination for several DLT solutions 15m

        The potential benefits of implementing distributed ledger technology are widely discussed among different business actors and governmental structures. Within the last decade, with the growing popularity of blockchain-based payment systems and cryptocurrencies, these discussions have considerably sharpened, and an extensive body of research has emerged on this soil. The goal of this study is to carry out a comparative analysis of several existing blockchain-based distributed ledger platforms. Besides that, the authors overview the most commonly used consensus algorithms and design approaches, since for any blockchain product the consensus algorithm is a crucial component that determines the performance of the overall system. Choosing the right algorithm ensures high reliability and throughput, while the wrong choice could cause fatal malfunctions of the application. A suitable algorithm should usually be chosen according to the task at hand, e.g. Nakamoto-style protocols could be considered better for public networks, while multiround voting protocols are more suitable for private and secure systems. The highest attention is paid to consensus algorithms based on the solution of the Byzantine Fault Tolerance (BFT) problem.

        Keywords: Byzantine Fault Tolerance problem, distributed ledger technology, multiround voting protocols

        Speaker: Mr Anar Faradzhov (Saint Petersburg State University)
      • 15:50
        Deep learning for automatic RF-modulation classification 15m

        Classical methods use statistical moments to determine the type of modulation in question. This essentially correct approach for discerning amplitude modulation (AM) from frequency modulation (FM) fails for more demanding cases such as AM vs. AM-LSB (lower side-band rejection), radio signals being richer in information than statistical moments. Parameters with good discriminating power were selected in a data conditioning phase, and binary deep-learning classifiers were trained for AM-LSB vs. AM-USB, FM vs. AM, AM vs. AM-LSB, etc. The parameters were formed as features from the primary wave reconstruction parameters: rolling pedestal, amplitude, frequency and phase. Very encouraging results were obtained for AM-LSB vs. AM-USB with stochastic training, showing that this particularly difficult case (inaccessible with statistical moments) is well solvable with multi-layer perceptron (MLP) neuromorphic software.
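        A small (hypothetical) numerical experiment shows what moments can and cannot do: the second central moment of a sampled waveform separates AM (whose envelope follows the message) from constant-envelope FM, while harder cases like AM-LSB vs. AM-USB need richer features, as noted above. All signal parameters below are illustrative.

```python
import math

def central_moment(xs, k):
    # k-th central moment of a sampled signal
    m = sum(xs) / len(xs)
    return sum((x - m) ** k for x in xs) / len(xs)

# One second of signal, illustrative parameters: 8 kHz sampling,
# 1 kHz carrier, 50 Hz message tone.
fs, fc, fm, n = 8000.0, 1000.0, 50.0, 8000
t = [i / fs for i in range(n)]

# AM: the envelope follows the message, inflating the signal variance
am = [(1.0 + 0.5 * math.sin(2 * math.pi * fm * ti))
      * math.sin(2 * math.pi * fc * ti) for ti in t]

# FM: constant envelope, so amplitude statistics match a bare carrier
fm_sig = [math.sin(2 * math.pi * fc * ti + 5.0 * math.sin(2 * math.pi * fm * ti))
          for ti in t]

var_am = central_moment(am, 2)      # ~0.5625: modulation adds variance
var_fm = central_moment(fm_sig, 2)  # ~0.5: same as an unmodulated carrier
```

        An AM-LSB and an AM-USB signal would give identical amplitude statistics, which is why the abstract turns to reconstructed wave parameters and an MLP classifier instead.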

        Speaker: M. Dima (University of Bucharest)
      • 16:05
        NARX neuromorphic software in ECG wave prediction 15m

        We present an approach to predicting ECG waves with non-linear autoregressive exogenous (NARX) neuromorphic software. These predictions are important for comparing the underlying QRS complex of the ECG wave with the slowly deteriorating waves (or arrhythmia) in cardiac patients. A deep Q-wave, for instance (such as 1/4 of the R-wave), is a typical sign of (inferior wall) myocardial necrosis, associated in most cases with vascular dysfunction. It is important to have a rolling predictor, slow ECG wave degradation being normal. A real-time predictor takes into account a suite of influencing parameters (body temperature, effort, current medication, sugar levels, stress, etc.), making it much better suited to calling "normal" vs. "anomalous" ECG waves than some outdated reference waves. Although this research is in its beginning stages, it shows encouraging results, and clinical studies can determine how effective the approach may be.

        Speaker: T. Dima (University of Bucharest)
      • 16:20
        Explanation of NMR mobility of peptide dendrimers using distributed computing 15m

        We applied distributed computing to study new peptide dendrimers with Lys-2Lys and Lys-2Arg repeating units in water. These molecules are promising nanocontainers for drug and gene delivery. The dendrimers have recently been synthesized and studied by NMR (Sci. Reports, 2018, 8, 8916; RSC Advances, 2019, 9, 18018) and successfully tested as carriers for gene delivery (Bioorg. Chem., 2020, 95, 103504; Int. J. Mol. Sci., 2020, 21, 3138). Both dendrimers have approximately the same molecular weight and the same charge. However, it was found by NMR that the orientational mobility of HH vectors in the CH2-N groups of the side chains of 2Arg spacers in Lys2Arg dendrimers is close to the slow mobility of the inner (main chain) CH2-N groups of branched Lys residues of both dendrimers, while the mobility of CH2-N groups in the side chains of 2Lys spacers of the Lys2Lys dendrimer is close to the fast mobility of the Lys terminal groups of both dendrimers. It has been suggested that the unexpected slowdown of the side CH2-N groups in 2Arg spacers of the Lys2Arg dendrimer is caused by the Arg-Arg pairing effect in water, which could lead to long-living pairs of Arg residues belonging to different dendrimer branches. Another possible reason could be a semiflexibility effect, because the distance from the end of the side chain of the spacers to the NMR-active CH2-N groups is different in 2Arg and 2Lys spacers. We used molecular dynamics simulation with the Gromacs package to check the possible contribution of both effects. It was found that the size and shape of the Lys-2Lys and Lys-2Arg dendrimers are similar. All other structural characteristics, including radial density and radial charge profiles, are also similar. We found that similar internal groups have similar slow mobility, and similar terminal groups have similar fast mobility, in both dendrimers. The mobilities obtained from the simulations are also very close to those obtained in the NMR experiment.
        However, the orientational mobility of the H-H vector in the side CH2-N groups of 2Arg spacers in the Lys-2Arg dendrimer is significantly slower than the mobility of the similar vector of the 2Lys spacer in the Lys-2Lys dendrimer. Exactly the same result was obtained earlier in the NMR experiment. We revealed that this difference is not due to arginine-arginine pairing, but to the semiflexibility effect associated with the different contour length from the CH2-N group to the end of the side arginine or lysine segment in the spacers.
        This work was supported by RSCF grant 19-13-00087. All calculations were performed using the computer facilities of the SPbSU and MSU Supercomputer Centers.

        Speaker: Oleg Shavykin (St. Petersburg State University, 7/9 Universitetskaya nab., 199034 St. Petersburg, Russia; ITMO University, Kronverkskiy pr. 49, 197101 St. Petersburg, Russia; Tver State University, Zhelyabova 33, Tver, Russia)
      • 16:35
        Fractal thermodynamics, big data and its 3D visualization 15m

        The need for big data analysis arises in many areas of science and technology: economics, medicine, geophysics, astronomy, particle physics and many others.
        This task is greatly simplified if the big data have structural patterns. In this talk, we consider the case when, to a high degree of accuracy, the big data are fractals.
        We propose to analyze the fractal structure of big data based on the fractal thermodynamics model. In this model, the state parameters fractal entropy Sf and fractal temperature Tf are introduced. These parameters are functions of fractal volume and fractal dimension.
        In the fractal thermodynamics model, the parameters Sf and Tf are related by the fractal equation of state (FES)
        $S_f = B \cdot T_f^{\gamma}$.
        The value of the exponent γ is called the FES index. This parameter is the most important characteristic of the fractal structure.
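        As a small worked example (with synthetic numbers, not Holter monitoring data), the FES index γ can be recovered from (T_f, S_f) pairs by a linear fit in log-log coordinates:

```python
import math

def fit_fes_index(T, S):
    # least-squares slope of log S versus log T: for S_f = B * T_f**gamma
    # the log-log relation is linear with slope gamma (the FES index)
    xs = [math.log(v) for v in T]
    ys = [math.log(v) for v in S]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic data obeying S_f = B * T_f**gamma with B = 2.0, gamma = 1.5
T = [0.5, 1.0, 2.0, 4.0, 8.0]
S = [2.0 * v ** 1.5 for v in T]
gamma = fit_fes_index(T, S)   # recovers ~1.5
```

        For real RR-interval data the points scatter around the power law, and the same fit yields the estimated FES index together with its residuals.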
        As a specific example of big data, consider the quantum phase spaces Sq of the instantaneous heart rhythm (IHR), built on the basis of large data on RR-intervals from daily Holter monitoring (HM) of patients from the Tver Regional Clinical Hospital (TRCH).
        Our estimates of the parameter δ for Sq for all patients considered in this work gave values of no more than 1%. Hence it follows that, to a high degree of accuracy, the set Sq is a fractal, and therefore the method of fractal thermodynamics can be applied to the study of its structure.
        The fractal state diagram SfTf allows one to visualize the character of the functional dependence of Sf on Tf. All the states of the SfTf diagram occupy a narrow band with a width of 1.3 and a length of 127 dimensionless units.
        3D visualization of the big HM data allows one to represent digital information on the array of RR-intervals in three-dimensional space and, consequently, in a form informative for analysis purposes.
        This report presents the 3D histograms of the quantum phase spaces of the IHR of patients of the Tver Regional Clinical Hospital, constructed with the Maple program system from the 24-hour HM data.

        Speaker: Victor Tsvetkov (Tver State University)
      • 16:50
        Extraction of traffic features in Software Defined Networks using an SDN Controller 15m

        Machine learning methods can be used to solve the problems of detecting and countering attacks on software-defined networks (SDN). Such methods require a large amount of initial data for training. Mininet is used as the modeling environment for the SDN. The main tasks in modeling a software-defined network are studying traffic within the network and testing various scenarios of attacks on network elements. The ONOS (Open Network Operating System) SDN controller is used as the network controller. Various network topologies are considered in the modeling. In addition to the tree network topology, the Fattree, Dragonfly and Jellyfish topologies are used, which have several alternative data transfer routes between a pair of nodes. During the modeling, nodes (hosts) are created; the number of hosts depends on the configuration. These nodes are then networked using a set of virtual switches. Direct communication between nodes is also specified in the configuration. Once the SDN is initialized, the hosts begin streaming according to a scripted scenario. The possibility of analyzing information about traffic within the network using an SDN controller in real time is investigated, as well as the possibility of collecting this information as a set of features. Modeling of software-defined networks under different initial conditions and for different attack scenarios can be carried out on a distributed computing system. Since the computational problem can be divided by data into many autonomous tasks, grid systems built from personal computers and voluntary computing can be used to speed up the process.

        This work was funded by RFBR according to research project No. 18-29-03264.

        Speaker: Mr Sergey Volkov (Peoples' Friendship University of Russia (RUDN University); Federal Research Center "Computer Science and Control" RAS)
      • 17:05
        Development and study of distributed control algorithms for swarm intelligence systems 15m

        This work is devoted to the development and study of methods for controlling collective behaviour in swarm robotic systems, using as an example the model problem of a swarm of robots cleaning a given bounded territory. Several distributed algorithms for solving this problem are considered, based on various classical methods and models of swarm intelligence. A simulation software system with a graphical user interface for visualizing the operation of the proposed algorithms is described. The results of computer experiments comparing the efficiency of all the proposed algorithms are presented. This work was supported by RFBR (grant No. 20-07-01053 A).

        Speaker: Mr Artem Goremykin (Dubna State University)
      • 17:20
        Application of machine learning methods to the recognition of Russian pre-revolutionary printed texts 15m

        This work addresses the application of optical character recognition (OCR) technologies and machine learning methods to the recognition of 19th-century printed Russian-language texts. The specific features of this task in comparison with the general OCR problem are analyzed. Existing methods and programs for solving the problem are reviewed. An adaptive approach to building a software system for recognizing such texts on the basis of the open Tesseract platform is proposed. Preliminary results on the effectiveness of the proposed approach and a comparison with existing solutions are presented. This work was supported by RFBR (grant No. 20-07-01053 A).

        Speaker: Mr Vladislav Fedorov (Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University)
      • 17:35
        Optimization of the computation of multidimensional integrals for estimation of the meson lifetime 15m

        To calculate the lifetime of mesons in hot and dense nuclear matter, it is necessary to compute 5-dimensional integrals with a complicated integrand function. This work presents algorithms and methods for calculating such integrals based on the Monte Carlo method. To optimize the computation, a parallel algorithm was implemented in the C++ programming language using OpenMP and NVIDIA CUDA technology. Calculations were performed on nodes with multicore CPUs, Intel Xeon Phi coprocessors and an NVIDIA Tesla K40 accelerator installed within the heterogeneous cluster of the Laboratory of Information Technologies, Joint Institute for Nuclear Research, Dubna. As a result, the lifetime of the pion was calculated using all possible pion-pion scattering reactions.
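        The Monte Carlo approach can be sketched in miniature. The integrand below is an illustrative one with a known exact value, not the physical pion-scattering integrand, and the sketch is serial Python rather than the talk's OpenMP/CUDA C++ implementation; note that each sample is independent, which is exactly what makes that parallelization straightforward.

```python
import random

def mc_integrate(f, dim, n_samples, seed=0):
    """Plain Monte Carlo estimate of the integral of f over the unit
    hypercube [0, 1]^dim: the mean of f at n_samples random points."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += f([rng.random() for _ in range(dim)])
    return total / n_samples

# Illustrative 5-dimensional integrand with a known answer:
# the integral of (x1 + ... + x5) over [0, 1]^5 equals 2.5 exactly.
estimate = mc_integrate(lambda x: sum(x), dim=5, n_samples=200_000)
print(estimate)  # close to 2.5
```

        The statistical error shrinks as 1/sqrt(n_samples) regardless of the dimension, which is why Monte Carlo is the method of choice for such 5-dimensional integrals.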

        Speaker: Daviti Goderidze
    • 15:20 17:00
      Round table on IT technologies in education 310 or Online - https://jinr.webex.com/jinr/j.php?MTID=m326d389213a5963a1114b8cbf9613612

      310 or Online - https://jinr.webex.com/jinr/j.php?MTID=m326d389213a5963a1114b8cbf9613612

      • 15:20
        National Research and Education Network of Russia: directions of development in the context of expanding international cooperation 15m

        The report overviews the current state and key directions of the advanced development of the National Research and Education Network (NREN) of Russia for the period 2021-2024. The unified NREN, called the National Research Computer Network (NIKS), was created in 2019 as a result of the integration of the federal-level telecommunication networks in the fields of higher education (RUNNet) and science (RASNet). In 2021, the Ministry of Science and Higher Education of the Russian Federation approved regulatory documents, including the management procedure, the concept and the roadmap for the functioning and development of NIKS. The functions of the administrator and operator of NIKS are assigned to the JSCC RAS; the network received the formal status of the Russian NREN. Work on the development of NIKS has been included in the National Project "Science and Universities" with the consolidation of activities, characteristics of the result and target indicators. The main emphasis in the contribution is on the current state of the foreign connectivity of NIKS, the planned expansion of international cooperation with the NRENs of the EAEU and BRICS countries, and the intensification of network and service interaction with the European GÉANT and NORDUnet consortia. Special attention is also paid to the ability of NIKS to provide telecommunications services with increased QoS requirements to leading Russian scientific centers (NRC Kurchatov Institute, JINR, BINP SB RAS, IKI RAS and some others) to ensure participation in international MegaScience projects in the fields of high energy physics, astronomy, Earth observation, etc. (e.g. LHC, ITER, XFEL, FAIR, LIGO, Belle II, SKA).

        The work was prepared at the JSCC RAS within the framework of state assignment no. 0580-2021-0014.

        Speaker: Dr Alexey Abramov (Joint SuperComputer Center of the Russian Academy of Science)
      • 15:35
        The concept of training IT professionals in cross-cutting digital technologies 15m

        The formation of a new generation of digital technologies, called «cross-cutting» due to the scale and depth of their influence, has driven a large-scale transformation of business and social models. These changes have a strong impact on the content of professional activity: new skills, and therefore new competencies, are required from employees. The rapid digitalization of the economy requires qualified specialists.
        Currently, there is a serious shortage of the IT specialists necessary for the development of national projects in Russia. The timely updating of higher education programs to meet global trends and cover the most in-demand technologies is of particular importance. These technologies and their sub-technologies are described in the roadmaps created within the framework of the national program "Digital Economy of the Russian Federation": neurotechnology and artificial intelligence, virtual and augmented reality systems, distributed ledger technologies, quantum technologies, new production technologies, robotics and sensor components, wireless communication.
        The report will present the system of training of highly qualified IT specialists in cross-cutting digital technologies at the Institute of System Analysis and Management (ISAM) of the State Dubna University. The features of the formation and development of the student’s relevant competencies and skills considering the areas of study at ISAM are discussed.

        Speaker: Prof. Evgenia Cheremisina (Dubna State University)
      • 15:50
        Unsupervised learning on automatically aggregated narrow-domain data 15m

        Information published by users in open access can serve as a good resource for collecting data when building datasets for training neural networks. One of the largest existing platforms for sharing photos and videos is Instagram. The main way users interact with each other on this platform is by publishing images, with additional features such as image captions, hashtags, bounding-box tags on images, etc.
        This makes Instagram attractive to researchers as a source for machine learning and image analysis.

        The platform has a number of features that can be used when building training datasets for neural network models that work with images. Hashtags make it possible to obtain labeled data on a given topic. In addition, the metadata received together with an image can be useful for finding topic-related profiles, refining the labeling, and interacting with users automatically.

        Data can be obtained from the Instagram platform in several well-known ways: sequences of requests to the platform's official API, browser automation tools, and various web crawlers. The official API has a number of limitations described in its documentation. The frequently changing Instagram interface makes browser automation tools such as Selenium difficult to apply, whereas a web crawler proves effective for collecting publicly available images and their associated metadata.

        After preprocessing, the databases obtained in this way can be used as training datasets for neural networks. Preprocessing of such datasets is necessary because the data can be very heterogeneous. Classification models make it possible to clean the dataset of "labeling" errors made by users in the captions of published images.

        Our goal is an automated application capable of building topical datasets from hashtags and profiles. The application also tracks a hashtag and automatically publishes a generated image in response. Generation is performed by applying a mask produced by an object segmentation neural network; to improve the segmentation quality, a super-resolution model is applied first.

        The problem of inaccurate labels extracted from metadata was solved by self-supervised training of the model. Applying the DINO method (self-distillation with no labels) with a transformer-based neural network improved the segmentation quality.

        As a result, a system was created that can react to Instagram platform events. The response to the user is formed by neural models and published automatically.

        Speaker: Ekaterina Pavlova (Saint Petersburg State University)
      • 16:05
        Using data from the labor market for analysis and education 15m

        Education systems provide specialists of different levels and specializations for the labor market. However, in the modern dynamic world of artificial intelligence, pandemic, and remote work, the labor market evolves dramatically from year to year. Universities and colleges must keep track of these changes to adapt educational programs and manage the number of student slots offered for different specializations. Detailed demand statistics from the labor market are a good data source for analysis to gauge current needs and predict future demand. Usually, there is no single source of data about all vacancies and CVs, so it is necessary to collect, preprocess, analyze and visualize the existing fragmented data. In this work, we study different raw data sources and their strong and weak points. A set of basic metrics to be derived from the data for analysis is proposed. Several types of roles are defined as primary users of the target system, along with their features and standard use cases. A conceptual system design is proposed to fulfill the requirements of labor market analysis. Evolutionary prototypes of the services are presented.
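        One of the basic metrics such a system might derive is the share of vacancies requesting each skill. A minimal sketch on hypothetical vacancy records (the field names and data are invented for illustration, not the system's actual schema):

```python
from collections import Counter

# Hypothetical vacancy records; field names are illustrative only.
vacancies = [
    {"title": "Data Engineer", "skills": ["python", "sql", "spark"]},
    {"title": "ML Engineer",   "skills": ["python", "pytorch"]},
    {"title": "Backend Dev",   "skills": ["java", "sql"]},
    {"title": "Data Analyst",  "skills": ["python", "sql"]},
]

def skill_demand(records):
    """Share of vacancies that request each skill."""
    counts = Counter(s for r in records for s in set(r["skills"]))
    n = len(records)
    return {skill: c / n for skill, c in counts.most_common()}

demand = skill_demand(vacancies)
print(demand["python"], demand["sql"])  # 0.75 0.75
```

        Tracking such shares over time is what lets a university gauge current needs and predict future demand.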

        Speaker: Irina Filozova (JINR MLIT, Dubna State University, Plekhanov Russian University of Economics)
      • 16:20
        The analysis of educational measurement results and its provision as a "software-as-a-service" solution in eLearning 15m

        In modern eLearning systems, educational measurements are used both to evaluate students' achievements and to control the learning process. However, eLearning systems usually have comparatively trivial embedded features for analyzing measurement results, which are insufficient for a thorough statistical study of the quality of assessment tools. To identify the characteristics of assessment materials, such as reliability, homogeneity, discriminatory power, validity and others, the researcher is forced to obtain a dump of the eLearning system's database and then use third-party software to perform the required data processing and calculations. This makes it difficult to analyze measurement results during the measurement itself, for example, in adaptive testing. We propose an approach to organizing and performing the analysis of measurement results using the software-as-a-service (SaaS) model of cloud computing. The SaaS user is provided with a set of tools for conducting full-fledged statistical analysis in real time. They also get access to customizable applications for implementing their own measurement procedures (including adaptive ones).
        Keywords: educational measurement, eLearning, SaaS, assessment tools, statistical analysis.
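        One of the characteristics named above, reliability, is commonly estimated by Cronbach's alpha. A minimal sketch of the standard textbook formula on made-up score data (this is an illustration of the statistic, not necessarily the service's implementation):

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(scores):
    """Classical Cronbach's alpha: scores[i][j] is the score of
    student i on item j."""
    k = len(scores[0])                     # number of items
    items = list(zip(*scores))             # per-item score columns
    totals = [sum(row) for row in scores]  # per-student total scores
    item_var = sum(variance(list(col)) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Illustrative 4-student, 3-item 0/1 score matrix (made-up data):
scores = [[1, 1, 1], [1, 0, 1], [0, 0, 1], [0, 0, 0]]
print(round(cronbach_alpha(scores), 3))  # 0.75
```

        Values close to 1 indicate that the items measure the same construct consistently; computing this in real time is the kind of analysis the proposed SaaS tools would expose.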

        Speaker: Ms Julia Lavdina (National Research Nuclear University MEPhI (Moscow Engineering Physics Institute))
      • 16:35
        Joint Scientific and Educational projects of JINR and NOSU 15m
        Speaker: Nelli Pukhaeva (JINR / NOSU)
    • 09:00 10:30
      Plenary reports Conference Hall

      Conference Hall

      • 09:00
        From Quantum Speed-up to Supremacy and Advantage 45m Conference Hall

        Conference Hall

        Quantum computing began in the early 1980s, when physicist Paul Benioff constructed a quantum mechanical model of the Turing machine, and physicist Richard Feynman and mathematician Yuri Manin discussed the potential of quantum computers to simulate phenomena that a classical computer could not feasibly simulate.
        In 1994 Peter Shor developed a polynomial quantum algorithm for factoring integers, with the potential to decrypt RSA-encrypted communications. In 1996 Lov Grover developed a quantum algorithm for unstructured search running in time O(√N) and proved that no classical algorithm can solve the analogous problem in less than O(N) time. Grover's algorithm is provably faster than any classical competitor, so it achieves a quantum speedup; this success led to a surge of theoretical and experimental results in the new field. However, after 30 years no new quantum algorithm has achieved such a speedup...
        In 2011 physicist John Preskill proposed and discussed the syntagm "quantum computational supremacy" (a significantly weaker form of speedup) at a Solvay Conference on Physics: "We therefore hope to hasten the onset of the era of quantum supremacy, when we will be able to perform tasks with controlled quantum systems going beyond what can be achieved with ordinary digital computers."
        Quantum computational supremacy is achieved when a formal computational task is performed with an existing quantum device that cannot be performed using any known algorithm running on an existing classical supercomputer in a reasonable amount of time.
        In recent years, investment in quantum computing research has increased in the public and private sectors. After a false start, on 23 October 2019 Google AI, in partnership with NASA, claimed to have performed a quantum computation that was infeasible on any classical computer. The field captures the interest and imagination of the general public and the media, and, not surprisingly, unfounded claims about the power of quantum computing and its applications proliferate.
        In this talk we will discuss the merits and limits of pursuing the goal of achieving a quantum advantage.
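        The O(√N) behaviour of Grover's algorithm can be checked with a tiny classical statevector simulation of the Grover iteration (a sketch for illustration; N, the marked index and the iteration count below are arbitrary choices):

```python
import math

def grover_success_probability(n, marked, iterations):
    """Exact simulation of Grover search over n items: start in the
    uniform superposition, then repeat (oracle phase flip on the marked
    item, inversion about the mean). Returns P(measure the marked item)."""
    amps = [1 / math.sqrt(n)] * n
    for _ in range(iterations):
        amps[marked] = -amps[marked]          # oracle: flip marked amplitude
        mean = sum(amps) / n
        amps = [2 * mean - a for a in amps]   # diffusion: invert about mean
    return amps[marked] ** 2

n = 64
k = round(math.pi / 4 * math.sqrt(n))  # ~(pi/4)*sqrt(N) iterations
p = grover_success_probability(n, marked=17, iterations=k)
print(k, p)  # 6 iterations already give success probability above 0.99
```

        A classical exhaustive search would need on the order of N = 64 queries on average, while about (π/4)√N ≈ 6 Grover iterations suffice, which is the quadratic speedup the abstract refers to.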

        Speaker: Cristian Calude (University of Auckland)
      • 09:45
        High-performance quantum computing technologies 45m
        Speaker: Alexey Fedorov (Russian Quantum Center)
    • 10:30 10:50
      Coffee 20m
    • 10:50 13:00
      Plenary reports Conference Hall

      Conference Hall

      Conference Hall, 5th floor
      • 10:50
        Clustering in ontology-based exploratory analysis of scientific productivity 40m

        An ontology-based approach to the exploratory analysis of textual data can significantly improve the quality of the obtained results. On the other hand, the use of domain knowledge defined in the form of ontologies increases the time needed to prepare a model and makes the required calculations more complex. The presentation will discuss selected aspects of cluster analysis performed on documents automatically annotated using ontologies. It seems that the methodological aspects of the cluster analysis process, especially the way in which distances are determined, should depend on the structure of a given ontology. Three cases involving ontologies with linear, hierarchical and network structures will be discussed, as well as the problem of clustering data annotated by two different ontologies. These issues will be illustrated by the results of analyses carried out on abstracts of scientific articles.
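        For the hierarchical case, one natural structure-dependent distance is the edge count between two terms via their lowest common ancestor. A sketch on a made-up mini-ontology (the terms and edges are invented; this illustrates the general idea, not the distance measures used in the talk):

```python
# Toy hierarchical ontology (child -> parent); terms are made up.
PARENT = {
    "cnn": "neural_network", "transformer": "neural_network",
    "neural_network": "machine_learning", "svm": "machine_learning",
    "machine_learning": "computer_science", "databases": "computer_science",
}

def path_to_root(term):
    """Chain of ancestors from the term up to the ontology root."""
    path = [term]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def tree_distance(a, b):
    """Edge-count distance: steps from each term up to their
    lowest common ancestor."""
    pa, pb = path_to_root(a), path_to_root(b)
    depth_a = {t: i for i, t in enumerate(pa)}
    for j, t in enumerate(pb):
        if t in depth_a:
            return depth_a[t] + j
    raise ValueError("no common ancestor")

print(tree_distance("cnn", "transformer"))  # 2: siblings under neural_network
print(tree_distance("cnn", "databases"))    # 4: only related via computer_science
```

        Under such a distance, documents annotated with sibling terms cluster together, while a flat (linear) ontology would treat all distinct terms as equally far apart, which is why the clustering methodology should depend on the ontology's structure.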

        Speaker: Pawel Lula (Cracow University of Economics, Poland)
      • 11:30
        Dell EMC PowerEdge servers: a locomotive of IT innovations 30m

        Sponsor talk by ITCost and Dell.

        Speaker: Nikita Stepanov (Dell)
      • 12:00
        Softline's experience and capabilities in providing infrastructure for scientific research 30m
        Speaker: Mr Sergey Monin (Softline)
      • 12:30
        Enterprise-class computing technologies 30m
        Speaker: Valery Yegorshev (NIAGARA COMPUTERS, LLC)
    • 13:00 13:20
      Afternoon coffee 20m
    • 13:20 14:00
      Plenary reports Conference Hall

      Conference Hall

      Conference Hall, 5th floor
      • 13:20
        RDIG-M 40m

        -

        Speaker: Vasiliy Velikhov (Kurchatov Institute National Research Centre)
    • 14:00 16:00
      DataManagement for supercomputers round table 403

      403

    • 14:00 16:00
      RDIG round table Conference Hall

      Conference Hall

    • 16:00 20:00
      Boat and Picnic Party 4h
    • 09:00 10:30
      Plenary reports Conference Hall

      Conference Hall

      • 09:00
        Current status of the MICC: an overview 45m

        -

        Speaker: Tatiana Strizh (JINR)
      • 09:45
        PIK Data Centre status 45m

        In the framework of the PIK nuclear reactor reconstruction project, a PIK Data Centre was commissioned in 2017. While the main purpose of the Centre is storage and processing of PIK experiments data, its capacity is also used by other scientific groups at PNPI and outside for solving problems in different areas of science. PIK Data Centre is an integral part of computing facilities of NRC "Kurchatov Institute" and consists of several types of computing nodes suitable for a wide range of tasks and two independent data storage systems, all of which are interconnected with a fast InfiniBand network. In this talk we will highlight the latest results and challenges after three years of successful operation.

        Speaker: Andrey Kiryanov (PNPI)
    • 10:30 11:00
      Coffee 30m
    • 11:00 12:30
      Plenary reports Conference Hall

      Conference Hall

      • 11:00
        Offline Software and Computing for the SPD experiment 45m

        The SPD (Spin Physics Detector) is a planned spin physics experiment at the second interaction point of the NICA collider, which is under construction at JINR. The main goal of the experiment is to test the basics of QCD via the study of the polarized structure of the nucleon and spin-related phenomena in collisions of longitudinally and transversely polarized protons and deuterons at center-of-mass energies up to 27 GeV and a luminosity up to 10^32 cm^-2 s^-1. The data rate at the maximum design luminosity is expected to reach 0.2 Tbit/s. Current approaches to SPD computing and offline software will be presented. The plan of the computing and software R&D in the scope of the SPD TDR preparation will be discussed.

        Speaker: Alexey Zhemchugov (JINR)
      • 11:45
        IT solutions for JINR tasks on the “Govorun” supercomputer 45m

        The “Govorun” supercomputer is a heterogeneous computing system that contains computing architectures of different types, including graphics accelerators. This architecture allows users to choose the optimal computing facilities for solving their tasks.
        To enhance the efficiency of solving user tasks, as well as the efficiency of utilizing both the computing resources and the data processing and storage resources, a number of special IT solutions were implemented on the “Govorun” supercomputer. The first is a hierarchical hyper-converged data processing and storage system with a software-defined architecture. This system was implemented because modern supercomputers are used not only as traditional environments for massively parallel calculations, but also as systems for Big Data analysis and artificial intelligence tasks arising in various scientific and applied problems. According to the speed of data access, the system is divided into layers available for the user's choice. Each layer of the developed data storage system can be used both independently and as part of data processing workflows. The second solution is resource orchestration: computational elements (CPU cores and graphics accelerators) and data storage elements (SSDs) form independent computing and data storage fields. Owing to this, users can allocate for their tasks the required number and type of compute nodes (including the required number of graphics accelerators), as well as the required volume and type of data storage.
        The implementation of the above technologies made it possible to perform a number of complex resource-intensive calculations in lattice quantum chromodynamics to study the properties of hadronic matter at high energy density and baryon charge and in the presence of supramaximal electromagnetic fields; to qualitatively increase the efficiency of modeling the dynamics of relativistic heavy-ion collisions; to speed up event generation and reconstruction for the experiments within the NICA megaproject; to carry out radiation safety computations for JINR experimental facilities; and to significantly accelerate studies in radiation biology and other applied tasks solved at JINR at the level of international scientific cooperation.

        The studies in this direction were supported by the RFBR special grant (“Megascience – NICA”), No. 18-02-40101.

        Speaker: Maxim Zuev (JINR)
    • 12:30 13:30
      Lunch 1h
    • 13:30 15:00
      Data Management, Organization and Access 407 or Online - https://jinr.webex.com/jinr/j.php?MTID=m573f9b30a298aa1fc397fb1a64a0fb4b

      407 or Online - https://jinr.webex.com/jinr/j.php?MTID=m573f9b30a298aa1fc397fb1a64a0fb4b

      https://jinr.webex.com/jinr/j.php?MTID=m573f9b30a298aa1fc397fb1a64a0fb4b
      • 13:30
        Russian data lake prototype as an approach towards national federated storage for Megascience. 15m

        A substantial data volume growth will come with the start of the HL-LHC era; it is not well covered by the current LHC computing model, even taking hardware evolution into account. The WLCG DOMA project was established to carry out research on data management and storage. National data lake R&D activities, as part of the DOMA project, address the study of possible technology solutions for organizing intelligent distributed federated storage. This talk will present the current status of the Russian Scientific Data Lake prototype and the methodology used for the validation and functional testing of the deployed infrastructure.

        Speaker: Andrey Kiryanov (PNPI)
      • 13:45
        LOAD BALANCING STRATEGIES IN GRID-SYSTEMS 15m

        Load balancing strategies in Grid systems are studied. The main classes of load distribution strategies are identified with the aim of increasing the efficiency of distributed systems. A model based on a fractal method for describing the dynamics of the load is considered.
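        Two of the classic strategy classes, static and dynamic load distribution, can be contrasted with a minimal sketch (the job costs are made up; the talk's fractal load model is not reproduced here):

```python
def round_robin(jobs, n_servers):
    """Static strategy: assign jobs cyclically, ignoring current load."""
    loads = [0] * n_servers
    for i, cost in enumerate(jobs):
        loads[i % n_servers] += cost
    return loads

def least_loaded(jobs, n_servers):
    """Dynamic strategy: always assign to the currently lightest server."""
    loads = [0] * n_servers
    for cost in jobs:
        loads[loads.index(min(loads))] += cost
    return loads

jobs = [5, 3, 8, 2, 7, 4]  # illustrative job costs
print(max(round_robin(jobs, 2)))   # 20: makespan of the static strategy
print(max(least_loaded(jobs, 2)))  # 15: the dynamic strategy balances better
```

        The dynamic strategy pays for its better makespan with the need for up-to-date load information, which is exactly the trade-off that distinguishes the strategy classes in Grid systems.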

        Speaker: Mr Sergei A. Mamaev (Data center engineer, RU-CENTER company)
      • 14:00
        APPROACH TO REMOTE PARTICIPATION IN THE ITER EXPERIMENTAL PROGRAM. EXPERIENCE FROM MODEL OF RUSSIAN REMOTE PARTICIPATION CENTER 15m

        The model of Russian Remote Participation Center (RPC) was created under the contract between Russian Federation Domestic Agency (RF DA) and ROSATOM as the prototype of full-scale Remote Participation Center for ITER experiments and for coordination activities in the field of Russian thermonuclear research. This prototype was used for investigation of the following technical and scientific tasks:
        1. Investigation of the high-speed data transfer via existing public networks (reliability, speed accuracy, latency)
        2. Test of ITER remote participation interfaces (Unified Data Access, Data Visualization and Analysis tool, etc.)
        3. Local Large-capacity data storage system (storage capacity more than 10 TB and disk I/O speed 300 MB/s)
        4. Remote monitoring of Russian plasma diagnostics and technical systems
        5. Security (Access to ITER S3 zone IT infrastructure (S3 – XPOZ (External to Plant Operation Zone)) in accordance with the requirements of cyber security and IEC 62645 standard.
        6. Participation in ITER main control room activities (remote copy of central screens and diagnostics HMI)
        7. Access to experimental data
        8. Local data processing with integration of existing data processing software (visualization, analysis, etc.)
        9. Scientific data analysis remotely by ITER remote participation interfaces

        In the presented report, the data transfer processes (latency, speed, stability, single- and multi-stream, etc.) and security issues were investigated within two separate L3 connections to the IO over a public internet exchange point and GÉANT. In addition, we tested various ITER tools for direct remote participation, such as screen sharing, data browsing, etc., over the distance from the RF RPC to the ITER IO (about 3000 kilometers).
        Experiments have shown that the most stable and flexible option for live data demonstration and for creating a control-room effect is the EPICS gateway. Together with the ITER dashboard, these tools make it possible to emulate almost any functional part of the MCR on the side of a remote participant. This approach allows us to create our own mimics and customize the CSS Studio HMIs for ourselves. Today, using these tools, we can integrate various systems remotely without any major restrictions.
        For data mirroring tasks, UDA server replication is an option. It may improve performance for the data browsing tools and some other tasks involving archive data. To obtain the best performance, it is important to find a multithreaded (multi-stream) data replication solution between UDA servers.
        The network connection setup strategy is still being developed together with the IO.
        Work done under contract Н.4а.241.19.18.1027 with ROSATOM and Task Agreement C45TD15FR with the ITER Organization.

        Speaker: Mr Oleg Semenov (Project Center ITER)
      • 14:15
        THE ALGORITHM FOR SOLVING THE PROBLEM OF SYNTHESIS OF THE OPTIMAL LOGICAL STRUCTURE OF DISTRIBUTED DATA IN ARCHITECTURE OF GRID SERVICE 15m

        Abstract. The questions of constructing the optimal logical structure of a distributed database (DDB) are considered. Solving these issues makes it possible to increase the speed of processing requests in a DDB in comparison with a traditional database. In particular, such tasks arise in the organization of systems for processing huge amounts of information from the Large Hadron Collider, the charged particle accelerator. In these systems, various DDBs are used to store information about the system of triggers for data collection from physical experimental installations, the geometry, and the operating conditions of the detector during the collection of experimental data.
        It is proposed to distinguish two interrelated stages in the synthesis algorithm. The first step is to solve the problem of distributing database clusters between the server and clients, followed by the problem of optimally distributing the data groups of each node by types of logical records. At the second stage, the problem of database localization on the nodes of the computer network is solved; in addition to the results of the first stage, the characteristics of the DDB are taken into account. The optimal logical structure of the DDB ensures the efficient use of the information system's computational resources. As a result, the local network of the DDB is decomposed into a number of clusters that have minimal information connectivity with each other. Solving the problem of synthesizing the optimal logical structure is also of great practical importance for the automated design of logical structures and for the automated formation of query specifications and adjustments of the DDB.

        Speaker: Elena Nurmatova (Russia)
      • 14:30
        The development of a new conditions database prototype for ATLAS RUN3 within the CREST project 15m

        The CREST project, a new conditions database prototype for Run 3 (intended for production use in Run 4), focuses on improving Athena-based access, metadata management and, in particular, global tag management. The project addresses the evolution of the data storage design and the optimization of conditions data access, enhancing the caching capabilities of the system in the context of physics data processing inside the ATLAS distributed computing infrastructure. The CREST architecture follows a client-server model, with the storage backend implemented in a relational database. Data access is realized with a pure REST API with JSON support. A new C++ client access library provides an HTTP query interface. A tool to convert the existing conditions data (stored in Oracle and accessible via the COOL API) into the new CREST system using a custom JSON format has also been implemented. Preliminary data migration has been performed to allow testing data retrieval from Athena, and validation of the server and client functionalities is in progress.
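        The serialization side of such a REST/JSON interface can be sketched with a hypothetical payload layout (the field names, tag name and values below are invented for illustration and are not the actual CREST schema):

```python
import json

# Hypothetical conditions payload; NOT the actual CREST JSON format.
payload = {
    "tag": "EXAMPLE-GLOBAL-TAG-01",   # made-up global tag name
    "since": 431810,                  # interval-of-validity start (run number)
    "data": {"channel_0": 1.0172, "channel_1": 0.9981},
}

def encode(p):
    """Serialize a payload for the body of a REST PUT/POST."""
    return json.dumps(p, sort_keys=True)

def decode(body):
    """Parse a payload returned by a REST GET."""
    return json.loads(body)

body = encode(payload)
roundtrip = decode(body)
print(roundtrip == payload)  # True: the round trip is lossless
```

        Keeping the payload a plain JSON document is what lets both the C++ client library and migration tools exchange conditions data over HTTP without sharing database drivers.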

        Speaker: Mikhail Mineev (JINR)
      • 14:45
        Development of the Condition Database for the experiments of the NICA complex 15m

        The processing and analysis of experimental and simulated data are an integral part of all modern high-energy physics experiments. These tasks are of particular importance in the experiments of the NICA project at the Joint Institute for Nuclear Research (JINR) due to the high interaction rate and particle multiplicity of ion collision events; the task of automating the considered processes for the NICA complex is therefore particularly relevant. The report describes a new information system based on the Condition Database, as well as related services to automate the storage and processing of information on the experiments. The Condition Database for the NICA experiments is aimed at storing, searching and using various parameters and operation modes of the experiment systems. The implemented system provides the information necessary for event data processing and physics analysis tasks, and organizes transparent, unified access to and management of the required parameter data throughout the life cycle of the scientific research. The scheme and purposes of the Condition Database, its attributes and key aspects of the development are shown. The integration of the Condition Information System with the experiment software systems is also presented.

        Speaker: Konstantin Gertsenberger (JINR)
    • 13:30 15:00
      Distributed computing systems 310 or Online - https://jinr.webex.com/jinr/j.php?MTID=m326d389213a5963a1114b8cbf9613612

      310 or Online - https://jinr.webex.com/jinr/j.php?MTID=m326d389213a5963a1114b8cbf9613612

      https://jinr.webex.com/jinr/j.php?MTID=m326d389213a5963a1114b8cbf9613612
      • 13:30
        Some Aspects of the Workflow Scheduling in the Computing Continuum Systems 15m

        Contemporary computing systems are commonly characterized in terms of data-intensive workflows that are managed by utilizing a large number of heterogeneous computing and storage elements interconnected through complex communication topologies. As the scale of the system grows and workloads become more heterogeneous in both inner structure and arrival patterns, the scheduling problem becomes exponentially harder, requiring problem-specific heuristics. Despite several decades of active research, one issue that still requires effort is enabling efficient workflow scheduling in such complex environments while preserving the robustness of the results. Moreover, a recent research trend coined under the term "computing continuum" prescribes the convergence of multi-scale computational systems with complex spatio-temporal dynamics and diverse sets of management policies. Since the emergence of the concept in 2020, there has been a lack of a reproducible model of the computing continuum, especially for better understanding scheduling heuristics, as real systems do not preserve this quality and hinder the comparative performance analysis of novel scheduling approaches. In this talk we will discuss how to approach this problem from a simulation perspective and discuss important algorithmic and architectural aspects.
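        The simulation perspective mentioned above can be illustrated with a minimal greedy list scheduler for a toy workflow DAG on identical workers (the task names, costs and dependencies are invented; real computing-continuum schedulers must handle heterogeneity and dynamics not modeled here):

```python
# Toy workflow: task -> running time, plus dependency edges.
tasks = {"fetch": 2, "clean": 3, "train": 5, "report": 1}
deps = {"clean": ["fetch"], "train": ["clean"], "report": ["clean"]}

def schedule(tasks, deps, n_workers):
    """Greedy list scheduler: repeatedly start the earliest-ready task
    on the worker that frees up first. Returns the makespan."""
    finish = {}                   # task -> finish time
    workers = [0.0] * n_workers   # per-worker free time
    pending = dict(tasks)
    while pending:
        # tasks whose dependencies have all finished
        ready = [t for t in pending
                 if all(d in finish for d in deps.get(t, []))]
        t = min(ready, key=lambda t: max(
            [finish[d] for d in deps.get(t, [])] or [0]))
        w = workers.index(min(workers))
        start = max(workers[w],
                    max([finish[d] for d in deps.get(t, [])] or [0]))
        finish[t] = start + pending.pop(t)
        workers[w] = finish[t]
    return max(finish.values())

print(schedule(tasks, deps, n_workers=2))  # 10: fetch+clean+train critical path
print(schedule(tasks, deps, n_workers=1))  # 11: one worker serializes everything
```

        Even this tiny model shows why reproducible simulation matters: the makespan depends jointly on DAG structure and worker count, so heuristics can only be compared fairly on a controlled, repeatable environment.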

        This work has been supported by ADAPT Project funded by the Austrian Research Promotion Agency (FFG) under grant agreement No. 881703 and ASPIDE Project funded by the European Union's Horizon 2020 Programme (H2020) under grant agreement No. 801091.
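The kind of list-scheduling heuristic the talk refers to can be illustrated with a minimal sketch (task names and costs are invented, and the greedy earliest-finish rule here is only one of many heuristics; the actual approaches studied are more elaborate):

```python
def greedy_schedule(task_costs, n_workers):
    """Assign each task to the worker that would finish it earliest."""
    ready = [0.0] * n_workers          # time at which each worker becomes free
    plan = []
    # longest-task-first ordering is a common makespan-reduction heuristic
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        w = min(range(n_workers), key=lambda i: ready[i])
        plan.append((task, w, ready[w], ready[w] + cost))
        ready[w] += cost
    return plan, max(ready)

tasks = {"t1": 4.0, "t2": 2.0, "t3": 3.0, "t4": 1.0}   # hypothetical runtimes
plan, makespan = greedy_schedule(tasks, 2)
```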

        Speaker: Mr Vladislav Kashansky (University of Klagenfurt and South Ural State University)
      • 13:45
        The JINR distributed information and computing environment: participants, features and challenges 15m

        The JINR distributed information and computing environment (DICE) was created to join resources for solving common scientific tasks as well as to distribute peak loads across the resources of partner organizations from the JINR Member States. To monitor the hardware resources and services of the growing DICE infrastructure, a system based on Prometheus and Thanos was designed and deployed. Collected metrics, including the geographical locations of the JINR DICE participants, are visualized with the help of Grafana. Software distribution is done with the help of the CERN Virtual Machine File System (CernVM-FS). All these topics, as well as challenges and possible ways to overcome them, are covered in detail.
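As an illustration of the kind of data such a monitoring stack works with, the sketch below parses the standard JSON shape of a Prometheus instant-query response to list failed scrape targets (the hostnames and values are invented; the real DICE deployment is not shown here):

```python
import json

# Illustrative Prometheus /api/v1/query response for the `up` metric
sample = json.loads("""
{"status":"success","data":{"resultType":"vector","result":[
  {"metric":{"__name__":"up","instance":"cloud1.example.org:9100"},"value":[1625656800,"1"]},
  {"metric":{"__name__":"up","instance":"cloud2.example.org:9100"},"value":[1625656800,"0"]}
]}}
""")

def down_instances(resp):
    """Return instances whose `up` metric is 0, i.e. unreachable scrape targets."""
    return [r["metric"]["instance"]
            for r in resp["data"]["result"] if float(r["value"][1]) == 0.0]
```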

        Speaker: Nikolay Kutovskiy (JINR)
      • 14:00
        Usage of the JINR SSO authentication and authorization system with distributed data processing services 15m

        The amount of data produced by scientific communities is already measured in tens and hundreds of petabytes and will grow significantly in the future. Distributed computing systems have proven to be an effective solution for handling such data streams. Technologies and products that allow deploying the important components of a distributed system already exist and are quite stable and well supported. Meanwhile, building a fully functional system, even from existing components, is not trivial.
        The Unified Resource Management System is under development in the Laboratory of Information Technologies of JINR. This system uses technologies and solutions developed during the evolution of middleware platforms for distributed processing of data from the LHC experiments, but is oriented to JINR-based experiments. The basis for the integration of components into the system is the use of a common authentication and authorization service. This talk will present the experience of integrating the CRIC information system and the Airflow-based workflow control system with the JINR SSO authentication service.
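Such SSO integrations typically start from the OAuth2/OIDC authorization-code flow; a minimal sketch of building the initial authorization request per RFC 6749 is shown below (the endpoint, client id and redirect URI are invented; the actual JINR SSO configuration is not described in the abstract):

```python
from urllib.parse import urlencode

def authorization_url(base, client_id, redirect_uri, scope="openid profile"):
    """Build an OAuth2/OIDC authorization-code request URL (RFC 6749, sec. 4.1)."""
    params = {"response_type": "code", "client_id": client_id,
              "redirect_uri": redirect_uri, "scope": scope}
    return base + "?" + urlencode(params)

# Hypothetical service registration
url = authorization_url("https://sso.example.org/auth", "airflow-client",
                        "https://airflow.example.org/callback")
```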

        Speaker: Andrey Iachmenev
      • 14:15
        Modeling the process of executing an evolutionary algorithm on a desktop grid 15m

        The presented report covers various approaches to modeling the process of solving optimization problems using a desktop grid [1]. The report summarizes the practical experience of performing computations on local infrastructures and in volunteer computing projects. Building preliminary models of the computational process makes it possible to avoid many systemic complexities when performing computations in practice.
        Systemic effects that affect computational performance will be considered [2]. The report will propose ways to quantify the efficiency and productivity of an evolutionary algorithm on a desktop grid [3]. Approaches to constructing mathematical and simulation models, as well as methods for calculating metric characteristics within the proposed approaches, will be considered.

        The research was supported by the grant of the Russian Foundation for Basic Research according to the project №19-07-00911.

        1. Nikolay P. Khrapov, Valery V. Rozen, Artem I. Samtsevich, Mikhail A. Posypkin, Vladimir A. Sukhomlin, Artem R. Oganov. Using virtualization to protect the proprietary material science applications in volunteer computing. Open Eng. 2018, v.8, pp. 57-60.

        2. Khrapov N.P. Analysis of the performance reasons for adapting the evolutionary algorithm to voluntary computing systems. Proc of the International Congress on Modern Problems of Computer and Information Sciences, 2019, pp. 21-26 (in Russian).

        3. Khrapov N.P. Metrics of efficiency and productivity when using the evolutionary algorithm on desktopgrid. Proc. ISP RAS, vol. 32, issue 4, 2020. pp. 133–140 (in Russian). DOI: 10.15514/ISPRAS–2020–32(4)–9

        Speaker: Nikolay Khrapov (Pavlovich)
      • 14:30
        IT for air quality management: mathematical modeling verified by special sampling and nuclear analytical methods, and the Air Quality Management System (AQMS) 15m

        AQMS is a broad name that covers, in particular, measurement and mathematical modeling of air pollution, geoinformation technologies for analyzing their results, and the preparation and execution of modeling on parallel supercomputer clusters. In the past few years, my team and I have been researching and refining mathematical models, expanding the amount of processed input data using the most powerful supercomputers available, and verifying model calculations with special monitoring based on nuclear analytical methods: NAA of bryophytes, an unmanned aerial vehicle, and a robotic automatic sampler with filters placed on a former mining tower in the area of interest. We created a model of air pollution relations in a large area between the Czech Republic, Poland and Slovakia, which we called Tritia. For this area we created an Air Quality Management System that uses the results of retrospective and prospective modeling over a large period of time (from 2006 to 2040). Researchers from the abovementioned countries and JINR participated in all these works.
        We would like to continue this cooperation and further develop our research in all these areas. We would like to transfer our results to the JINR environment, that is, neutron activation analysis (NAA) of filters and bryophytes. Furthermore, we would like to focus on transferring the mathematical models and information systems, and their new variants (improvements and refinements), to the platforms used at JINR.

        Speaker: Petr Jancik (JINR; VSB - Technical University of Ostrava)
      • 14:45
        The graph diameter of a distributed system with a given dominant set 15m

        This work considers a distributed computing system in which the control functions are dispersed among several dominant nodes that are directly connected to all the others. This configuration reduces the vulnerability of the entire network, since the failure of a single control element no longer immediately disrupts its operation. On the other hand, a large length of the maximum shortest chain (diameter) increases the data transfer time, which is bad for the functioning of the entire system.
        The connection between the maximum shortest chain of a distributed network graph and the size of a certain dominant set is investigated. The structure of a graph with the maximum diameter on the set of all graphs with a given dominant set is presented, a diametric chain is constructed, and the value of the extreme diameter is estimated.
        Based on this construction, it is possible to generate various network graphs with a given dominant set and a diameter that takes certain values. To do this, we propose a number of operations that change the set of edges of the original graph. As a result, this method provides a way to construct graph structures with given metric characteristics.
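The basic quantities involved, diameter and the effect of a dominant node, can be checked with a small BFS sketch (illustrative only; the paper's extremal constructions and edge operations are not reproduced here):

```python
from collections import deque

def bfs_dist(adj, s):
    """Hop distances from vertex s via breadth-first search."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameter(adj):
    """Longest shortest path over all vertex pairs (graph assumed connected)."""
    return max(max(bfs_dist(adj, v).values()) for v in adj)

# A single node adjacent to all others (a dominant set of size one)
# forces the diameter to be at most 2:
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
d = diameter(star)
```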

        Speaker: Ilya Kurochkin (IITP RAS)
    • 13:30 15:00
      HPC 403 or Online - https://jinr.webex.com/jinr/j.php?MTID=mf93df38c8fbed9d0bbaae27765fc1b0f

      • 13:30
        Population annealing method and hybrid supercomputer architecture 15m

        The population annealing method is a promising approach for large-scale simulations because it is potentially scalable on any parallel architecture. We report an implementation of the algorithm on a hybrid program architecture combining CUDA and MPI [1]. The problem is to keep all general-purpose graphics processing unit devices as busy as possible by efficiently redistributing replicas. We provide testing details on hardware based on Intel Skylake CPUs and Nvidia V100 GPUs, running more than two million replicas of Ising model samples in parallel. As the complexity of the simulated system increases, the acceleration grows toward perfect scalability.
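The resampling-and-redistribution step that keeps all devices busy can be sketched in a few serial lines (a toy model with invented weights; the reported implementation performs this across CUDA devices via MPI):

```python
import random

def resample(replicas, weights, rng):
    """Multinomial resampling step of population annealing: replicas are
    duplicated or culled in proportion to their (Boltzmann) weights."""
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(replicas, weights=probs, k=len(replicas))

def redistribute(replicas, n_workers):
    """Round-robin the resampled population so each device gets an equal share."""
    shards = [[] for _ in range(n_workers)]
    for i, r in enumerate(replicas):
        shards[i % n_workers].append(r)
    return shards

rng = random.Random(0)
pop = resample(list(range(8)), [1.0] * 8, rng)   # uniform weights for the demo
shards = redistribute(pop, 4)
```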

        This work was initiated under Grant No. 14-21-00158 and finished under Grant No. 19-11-00286 from the Russian Science Foundation. We also acknowledge the support within the scientific program of the Landau Institute for Theoretical Physics. We used the Manticore cluster of ANR laboratory at the Science Center in Chernogolovka for the small-scale testing and the supercomputing facility of the National Research University Higher School of Economics for the large-scale testing [2].

        [1] A. Russkov, R. Chulkevich, L. Shchur, Computer Physics Communications, 261, 107786 (2021)
        [2] P. S. Kostenetskiy, R. A. Chulkevich, and V. I. Kozyrev, J. Phys. Conf. Ser. 1740, 012050 (2021)

        Speaker: Lev Shchur (leading researcher, Landau Institute for Theoretical Physics)
      • 13:45
        Multi-GPU training and parallel CPU computing for the machine learning experiments using Ariadne library 15m

        Modern machine learning (ML) tasks and neural network (NN) architectures require huge amounts of GPU computational facilities and demand high CPU parallelization for data preprocessing. At the same time, the Ariadne library, which aims to solve complex high-energy physics tracking tasks with the help of deep neural networks, lacks multi-GPU training and efficient parallel data preprocessing on the CPU.
        In our work, we present our approach to multi-GPU training in the Ariadne library. We will describe efficient data caching, parallel CPU data preprocessing, and a generic ML experiment setup for prototyping, training, and inference of deep neural network models. Results in terms of speed-up and performance for the existing neural network approaches are presented, obtained with the help of the GOVORUN computing resources.

        Speaker: Egor Shchavelev (Saint Petersburg State University)
      • 14:00
        Implementing the Graph Model of the Spread of a Pandemic on GPUs 15m

        Modeling the spread of viruses is an urgent task in modern conditions. In the created model, contacts between people are represented in the form of a Watts–Strogatz graph. We studied graphs with tens of thousands of vertices over a simulation period of six months. The paper proposes methods for accelerating computations on graph models using graphics processors. The considered problem contains two resource-intensive computational tasks: generating the adjacency matrix of a graph that models the presence of contacts between people, and traversing this graph to simulate infection. The calculations were carried out in sequential mode and with acceleration on GPUs. The modeling system software is implemented using the CUDA, CuPy and PyTorch libraries. The calculations were carried out on a Tesla T4 graphics accelerator. Compared to computations without graphics accelerators, their application gave an 8-fold increase in speed. The reported study was funded by RFBR and CNPq, FASIE, DBT, DST, MOST, NSFC, SAMRC according to research project No. 20-51-80002.
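A minimal serial sketch of such a model, a Watts–Strogatz contact graph plus a per-contact infection step, is given below (the graph size, infection probability and step count are illustrative; the paper's implementation runs these tasks on GPUs via CUDA/CuPy/PyTorch):

```python
import random

def watts_strogatz(n, k, p, rng):
    """Ring lattice where each vertex links to its k nearest neighbours,
    with each 'forward' edge rewired to a random target with probability p."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for j in range(1, k // 2 + 1):
            adj[v].add((v + j) % n)
            adj[(v + j) % n].add(v)
    for v in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                u = (v + j) % n
                w = rng.randrange(n)
                if w != v and w not in adj[v]:
                    adj[v].discard(u); adj[u].discard(v)
                    adj[v].add(w); adj[w].add(v)
    return adj

def infect_step(adj, infected, beta, rng):
    """One simulated day: each contact of an infected vertex
    becomes infected with probability beta."""
    new = set(infected)
    for v in infected:
        for u in adj[v]:
            if u not in infected and rng.random() < beta:
                new.add(u)
    return new

rng = random.Random(42)
g = watts_strogatz(1000, 4, 0.1, rng)
infected = {0}
for _ in range(180):           # six months of daily steps
    infected = infect_step(g, infected, 0.05, rng)
```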

        Speaker: Vladimir Sudakov (Plekhanov Russian University of Economics, Keldysh Institute of Applied Mathematics (Russian Academy of Sciences))
      • 14:15
        Developing a Toolkit for Task Characteristics Prediction Based on Analysis of Queue’s History of a Supercomputer 15m

        Empirical studies have repeatedly shown that in High-Performance Computing (HPC) systems, users' resource estimates lack accuracy [1]. Underestimating resources may cause a job to be terminated at any step of the computation, wasting the resources already allocated; overestimating resources wastes them as well. SLURM, a popular job scheduler, has a mechanism to predict only the starting time of a job; this mechanism is very primitive, and the time is more often overestimated. In addition, there is a software system for modeling the activity of computing cluster users under SLURM, which collects statistics to simulate the load on a model of a computing cluster controlled by SLURM. This software provides several metrics used by cluster system administrators. The approach has been tested on data from computing clusters of the Faculty of Computational Mathematics and Cybernetics, Moscow State University, and NIKIET JSC. However, this solution is not suitable either, because it does not allow analysis and prediction. There are other systems for analyzing the efficiency of cluster usage; nevertheless, connecting such systems to SLURM consumes a large amount of resources, while our proposed method only requires developing a component into which the analysis system can easily be embedded. In this work, to utilize the overall HPC system effectively, we propose a new approach to predicting the resources required by a newly submitted job, such as the number of CPUs, time slots, etc. The study focused on predictive analytics, including regression and classification tasks. The possibility of designing a plugin to apply our method in real applications used by system users was also studied.
        A supervised machine learning (ML) system comprising several ML models was trained on statistical data collected from the reference queue systems. Our dataset includes per-job and per-user features. To make the system applicable to a job scheduler, in particular SLURM, a dynamically connected SLURM SPANK plugin was designed. This plugin collects statistics, analyzes them and builds a model from the analysis; predictions are then made with this model. Plugins may not only complement but also modify the behavior of SLURM to optimize the system's performance. SLURM provides several user utilities; in this work we are mainly interested in two: srun, which starts a job, and sbatch, which places a job in the queue, i.e. sends a batch script (instructions to SLURM on how to perform the job) to SLURM. The task is accomplished through the design, development and testing of a component connected to the SLURM queuing system. The component collects statistical data and analyzes the flow of computational tasks. The proposed plugin, MLSP (Machine Learning SLURM Plugin), takes control when the srun and sbatch commands are executed. The code splits into two large parts: the main part, working with SLURM, and an auxiliary part, working with the server.
        Our work has led us to conclude that adding new features to the dataset improves prediction accuracy. An innovative solution for the resource allocation problem was found. The possibility of writing a plugin to apply our machine learning system in practical applications was studied; designing a plugin allows the practical use of machine learning algorithms in decision making. However, the performance of this component still needs to be improved. In future work, we will use this component to evaluate our algorithms on a real cluster to find the best method for predicting the required resources.
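A toy baseline for the kind of prediction MLSP performs might look like the following (a per-user running average of past runtimes with a global fallback; names and numbers are invented, and the actual system trains several ML models on richer per-job and per-user features):

```python
from collections import defaultdict

class RuntimePredictor:
    """Predict a job's runtime as the mean of the user's past runtimes,
    falling back to the global mean for unseen users."""
    def __init__(self):
        self.hist = defaultdict(list)

    def observe(self, user, runtime):
        self.hist[user].append(runtime)

    def predict(self, user):
        runs = self.hist.get(user)
        if runs:
            return sum(runs) / len(runs)
        every = [r for rs in self.hist.values() for r in rs]
        return sum(every) / len(every) if every else 0.0

p = RuntimePredictor()
p.observe("alice", 100.0)   # hypothetical accounting records, seconds
p.observe("alice", 200.0)
p.observe("bob", 60.0)
```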
        1- Tsafrir, Dan, Yoav Etsion, and Dror G. Feitelson. "Backfilling using runtime predictions rather than user estimates." School of Computer Science and Engineering, Hebrew University of Jerusalem, Tech. Rep. TR 5 (2005): 2003.

        Speaker: Mr Mahdi Rezaei (Moscow Institute of Physics and Technology, Moscow, Russia)
      • 14:30
        Research on improving the performance of explicit numerical methods on x86 and ARM CPUs 15m

        Explicit numerical methods are used to solve and simulate a wide range of mathematical problems arising from mathematical models of physical processes. However, simulations with large model spaces can require a tremendous amount of floating-point calculations, and run times of several months or more are possible even on large HPC systems.
        The vast majority of HPC systems in the field today are powered by x86 and ARM CPUs [1]. Our aim is to investigate methods of increasing the computational speed of simulations on CPUs and to compare the performance and energy efficiency of x86 and ARM CPUs. A high-order finite difference time domain (FDTD) method for solving the 3D acoustic equation was used in our work.
        For HPC, in conjunction with parallel computing, we used CPU capabilities such as SIMD computing (AVX on x86 and NEON on ARM) [2] and the hierarchical structure of the CPU caches to optimize data locality. For data locality we used loop tiling, a method of changing the order of traversal of the iteration space [3]. Our work considers a number of tiling optimization algorithms and test calculations for the x86 and ARM architectures. In particular, we considered recursive and non-recursive cube tiling [4] and the ZCube data locality optimization.
        We have found that ZCube increases the performance of SIMD computations on the ARM CPU [5] and speeds up computation with tiling on both CPU architectures. Also, as expected, we found that non-recursive tiling performs better than recursive tiling on both CPU architectures due to fewer CPU cache misses. Finally, we found that the ARM CPU has a 12 times better performance-to-energy-efficiency factor than the x86 CPU.
        In this respect, extending our experiments to ARM cluster computing while increasing the performance of non-recursive and recursive tiling would be of interest.
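The loop-tiling transformation discussed above can be illustrated on a simple 2-D stencil (Python for readability; the paper's kernels are SIMD-vectorized code for x86/ARM, and the stencil here is an invented toy, not the FDTD kernel):

```python
def smooth_naive(a):
    """3-point vertical average over the grid interior, row-major traversal."""
    n, m = len(a), len(a[0])
    out = [row[:] for row in a]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            out[i][j] = (a[i-1][j] + a[i][j] + a[i+1][j]) / 3.0
    return out

def smooth_tiled(a, tile=4):
    """Same stencil, but the iteration space is walked tile by tile so the
    working set stays cache-resident; the result is bitwise identical."""
    n, m = len(a), len(a[0])
    out = [row[:] for row in a]
    for ii in range(1, n - 1, tile):
        for jj in range(1, m - 1, tile):
            for i in range(ii, min(ii + tile, n - 1)):
                for j in range(jj, min(jj + tile, m - 1)):
                    out[i][j] = (a[i-1][j] + a[i][j] + a[i+1][j]) / 3.0
    return out

grid = [[float(i * 7 + j) for j in range(9)] for i in range(9)]
```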
        References

        1. http://www.top500.org/
        2. S. M. et. al., "Vector instructions to enable efficient synchronization and parallel reduction operations," U.S. Patent WO2009120981A2, Oct. 2009.
        3. J. Xue, "On tiling as a loop transformation," Parallel Processing Letters, vol. 07, no. 04, pp. 409-424, 1997.
        4. V. Furgailo, A. Ivanov, and N. Khokhlov, "Research of techniques to improve the performance of explicit numerical methods on the CPU," pp. 79-85, 09 2019.
        5. J. Bakos, Embedded Systems: ARM Programming and Optimization. Elsevier Science, 2015.

        Speaker: Vladislav Furgailo
      • 14:45
        Analysis of the effectiveness of various methods for parallelizing data processing implemented in the ROOT package 15m

        The ROOT software package plays a central role in high-energy physics analysis and is being upgraded in several ways to improve processing performance. In this paper, we consider several tools implemented in this framework for calculations on modern heterogeneous computing architectures.

        PROOF (Parallel ROOT Facility, an extension of the ROOT system) uses the natural parallelism of data structures located in files of a special format, providing direct access to any particular value. PROOF [1,2] divides the common work into small fragments called packets. We have investigated how the processing speed depends on the minimum and maximum packet size (in seconds or events) and on the size of the first packet, which is used for calibration. Our calculations also showed that when processing data with PROOF, it is desirable to use the highest possible structuring of the primary data.
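The packetizer behaviour under study, a small calibration packet followed by packets that grow up to a maximum size, can be sketched as follows (the sizes and growth rule are invented for illustration; PROOF's actual packetizers adapt to measured worker throughput):

```python
def make_packets(n_events, first, min_size, max_size):
    """Split an event range into packets: a small calibration packet first,
    then packets that grow geometrically until capped by max_size."""
    packets, pos, size = [], 0, first
    while pos < n_events:
        size = max(min_size, min(size, max_size, n_events - pos))
        packets.append((pos, pos + size))   # half-open [start, end) event range
        pos += size
        size *= 2                           # grow after each packet
    return packets

pkts = make_packets(100, first=5, min_size=5, max_size=40)
```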

        Exploiting the heterogeneity of modern computing architectures, ROOT can be used together with OpenCL technology. Unlike CUDA, whose use is limited to graphics processing units, OpenCL is well adapted to various families of microprocessors, so a program developed for one type of computing architecture can easily be transferred to another. The expediency of performing calculations on a GPU is considered, depending on the type of data processing algorithm.

        Implicit multithreading [3], implemented in ROOT since version 6.06, is based on one of the key innovations of the framework: the columnar data format. Data components (variables, structures, or objects) are converted into independent buffers, which are periodically compressed and written to memory. Implicit multithreading parallelizes loops over these buffers during the transformation and compression stages.

        When processing large amounts of data, read and write speed can be critical. A new facility for asynchronous file merging, implemented in the TBufferMerger class [4], allows writing data in parallel from multiple threads to a single output file. Our calculations show good scaling of macro execution time with the number of processor cores used.
        References
        1. Brun R. et al. Parallel interactive data analysis with PROOF // Nuclear Instruments and Methods in Physics Research, A559, pp. 13-16, 2006.
        2. Solovjeva T.M., Soloviev A.G. Comparative study of the effectiveness of PROOF with other parallelization methods implemented in the ROOT software package // Computer Physics Communications, v. 233, pp. 41-43, 2018.
        3. Piparo D. et al. Expressing Parallelism with ROOT // Journal of Physics: Conf. Series 898 (2017) 072022.
        4. Amadio G., Canal F., Guiraud E. and Piparo D. Writing ROOT Data in Parallel with TBufferMerger // EPJ Web of Conferences 214, 05037 (2019).

        Speaker: Tatyana Solovjeva (JINR)
    • 13:30 15:00
      Research infrastructure Conference Hall or Online - https://jinr.webex.com/jinr/j.php?MTID=m6e39cc13215939bea83661c4ae21c095

      • 13:30
        The JINR MICC mass storage system: status and prospects 15m

        The development of experiments in various fields leads to an increase in the volume of stored data and in the intensity of data processing. This raises both quantitative and qualitative requirements for data storage systems. The current status, the results of the recently performed upgrade and the prospects for further development of the robotic data storage of the Multifunctional Information and Computing Complex (MICC) of the Joint Institute for Nuclear Research (JINR) are considered. The report discusses the tools for assessing the quality of the system's operation and provides access statistics. Examples of extending the use of the storage beyond the virtual organizations included in WLCG are discussed.

        Speaker: Vladimir Trofimov (JINR)
      • 13:45
        Usage of time series databases in the Grafana platform for the NetIs service 15m

        NetIs is a service used to monitor the data acquisition network of the ATLAS experiment. The first version was developed at CERN in 2010. The network group would like to replace NetIs because it is difficult to maintain, and it stores data in a Round Robin Database (RRD), resulting in a loss of granularity over time that makes the tool unsuitable for retrieving accurate values from the past. The graphs produced by NetIs are generated by the backend server and are quite static, though the GUI is familiar to many users. The main idea was to evaluate other time series databases supported by the open-source tool Grafana, so that data can be shown in a dynamic way. The Persistent Back-End for the ATLAS Information System (P-BEAST), developed in ATLAS for permanent storage of operational data, was already integrated with Grafana and successfully collecting network monitoring statistics.
        Grafana, despite being a very popular visualization web application, does not support some GUI elements used in NetIs, such as a tree view or the positioning of drop-downs. JavaScript code integrated with Grafana was used to overcome these limitations.

        Speaker: Evgeny Alexandrov (JINR)
      • 14:00
        Monitoring System for the Russian Scientific Data Lake Prototype 15m

        The Russian Scientific Data Lake is a part of the Data Lake R&D conducted by the DOMA project. It aims to mitigate the drawbacks of the present LHC computing model in order to cope with the unprecedented scientific data volume, at the multi-exabyte scale, that will be delivered by experiments in the High Luminosity phase of the LHC. The prototype of the Russian Scientific Data Lake is being implemented, and it tests different configurations of data caching and buffering mechanisms using real ATLAS and ALICE experiment payloads. In order to compare the efficiency of resource usage between different configurations and to control the state of the deployed infrastructure, a unified monitoring system was developed. It aggregates data from various sources into a single ElasticSearch storage. On top of it, a set of Kibana dashboards and a special web application based on the Django framework were developed for monitoring test jobs and software components of the Russian Scientific Data Lake infrastructure. In this work we present the architecture, components and features of the unified monitoring system.

        Speaker: Mr Aleksandr Alekseev (Ivannikov Institute for System Programming of the RAS)
      • 14:15
        JINR CMS Tier-1, LCG2 accounting system 15m

        The problem of evaluating the efficiency of the JINR MLIT grid sites has always been topical. At the beginning of 2021, a new accounting system was created; it fully covers the functionality of the previous system and further extends it. The report will provide detailed information on the implemented accounting system.

        Speaker: Ivan Kashunin (JINR)
      • 14:30
        Development of effective access to the distributed scientific and educational e-infrastructure 15m

        The article describes approaches to the modernization of a distributed electronic infrastructure that combines various types of resources aimed at supporting research and educational activities in Moldova. The development trends of computing infrastructures and technologies aimed at creating conditions for solving complex scientific problems with high requirements for computing resources are analyzed. Expanding the external channels for interaction with the pan-European academic network GEANT, improving regional connections and Internet access are the main directions of development of the external connectivity of the national R&E electronic infrastructure RENAM. A significant role here belongs to the implementation of the EU-funded EaPConnect project, focused on the creation of new Cross-Border Fiber (CBF) channels connecting PoP RENAM (Chisinau), the PoPs of the Ukrainian NREN URAN in Odessa and Kiev, and PoP GEANT in Poznan, Poland. At the same time, creating opportunities for storing and accessing growing volumes of research data, including in Moldova, is of special interest. The relatively new European Open Science Cloud (EOSC) initiative, which aims to accumulate various scientific information in the cloud for open access, has a further significant impact on the intensification of the use of distributed computing resources. An initiative aimed at creating open repositories of research data supports open science and the development of technologies for implementing the FAIR (Findable, Accessible, Interoperable and Reusable) data principles based on the wide spread of open research data repositories. The trends in the development of tools for automating the configuration and administration of complex cloud infrastructures for hosting data storage and archiving platforms are described. Problems limiting the scalability of the existing cloud infrastructure are identified, and solutions are proposed to overcome these limitations by using new tools for configuring and administering the cloud infrastructure. Research work is now focused on deploying new types of cloud infrastructure that will benefit end users by combining the computational resources of multiprocessor clusters with efficient application platforms, user interfaces, and infrastructure management tools. For example, RENAM provides a service for scientific and educational organizations to support video conferencing based on the BigBlueButton (BBB) platform and its integration with the Moodle Learning Management System. To implement effective access to distance learning systems, various ready-to-use configurations of the BBB platform are offered, based on the resources of the RENAM infrastructure and the server resources of connected organizations.
        This work was supported by the European Commission, the EaPConnect project (grant contract no. 2015/356-353/11.06.2015), project H2020 NI4OS-Europe (grant no. 857645) and the National Agency for Science and Development (grant no. 20.80009.5007.22).

        Speaker: Grigore Secrieru (Vasile)
      • 14:45
        Data Center Simulation for the BM@N experiment of the NICA Project 15m

        One of the foremost tasks in creating the computing system of the NICA complex is to model the centers for storing and processing data that come from the experimental setups of the complex, in particular the BM@N detector, or are generated with special software to verify the developed data processing algorithms and to compare with the expected physical result.

        After reviewing the existing software tools for data center simulation, a new approach was chosen to solve the problem. The approach is based on representing information processes as byte streams and on using probability distributions of the significant data acquisition processes; in particular, the probabilities of losses of incoming information for different configurations of the data center equipment are to be determined.
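The chosen approach can be illustrated with a toy byte-stream model: a finite buffer, a random inflow and a fixed drain rate, from which a loss fraction is estimated (all rates, the buffer size and the uniform inflow law are placeholders for the measured distributions of the real setup):

```python
import random

def simulate_losses(rate_in, rate_out, buffer_size, n_steps, rng):
    """Count bytes dropped when random inflow overfills a finite buffer
    drained at a fixed rate; return the fraction of arrived bytes lost."""
    buf, lost, arrived = 0, 0, 0
    for _ in range(n_steps):
        inflow = rng.randint(0, 2 * rate_in)   # crude stand-in for the arrival law
        arrived += inflow
        buf += inflow
        if buf > buffer_size:                  # overflow: incoming bytes are lost
            lost += buf - buffer_size
            buf = buffer_size
        buf = max(0, buf - rate_out)           # drain to storage/processing
    return lost / arrived if arrived else 0.0

rng = random.Random(1)
loss = simulate_losses(rate_in=100, rate_out=90, buffer_size=500,
                       n_steps=10000, rng=rng)
```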

        The current status of the work and the first results of modeling centers for processing and storing data of the BM@N experiment of the NICA complex for the next run are presented.

        Speaker: Daria Priakhina (MLIT)
    • 15:00 15:30
      Coffee 30m
    • 15:30 16:30
      Data Management, Organization and Access 407 or Online - https://jinr.webex.com/jinr/j.php?MTID=m573f9b30a298aa1fc397fb1a64a0fb4b

      • 15:30
        Development of the Event Metadata System for the NICA experiments 15m

        Particle collision experiments are known to generate substantial amounts of data that must be stored and later analyzed. Typically, only a small subset of all collected events is relevant to a particular physics analysis task. Although it is possible to obtain the required subset of records directly, by iterating through the whole volume of collected data, the process is very time- and resource-consuming. A more convenient approach is to have an event metadata (indexing) system that stores summary properties of all events and allows fast searching and data retrieval based on various criteria. Such a system, called the Event Metadata System, is being developed for the fixed-target and collider experiments of the NICA project. The design of the system, its components, user interfaces and REST API service, its integration with existing experiment systems and software, as well as associated challenges, are presented.
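The core idea, selecting events by summary properties without touching raw data, can be sketched with an in-memory index (the schema, run number and track counts are invented; the real system has its own storage, schema and REST API):

```python
import sqlite3

# Hypothetical event-summary schema; a real system stores many more properties
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE event_meta (
    run INTEGER, event INTEGER, n_tracks INTEGER, beam_energy REAL)""")
conn.execute("CREATE INDEX idx_tracks ON event_meta(n_tracks)")
rows = [(7, i, i % 50, 9.2) for i in range(1000)]   # synthetic events
conn.executemany("INSERT INTO event_meta VALUES (?,?,?,?)", rows)

def select_events(min_tracks):
    """Return (run, event) pairs matching a selection criterion."""
    cur = conn.execute(
        "SELECT run, event FROM event_meta WHERE n_tracks >= ?", (min_tracks,))
    return cur.fetchall()

sel = select_events(45)
```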

        Speaker: Peter Klimai (INR RAS)
      • 15:45
        Tape libraries as a part of the JINR MICC mass storage system 15m

        The Multifunctional Information and Computing Complex (MICC) of the Laboratory of Information Technologies of the Joint Institute for Nuclear Research is a multicomponent hardware and software complex that supports a wide range of tasks related to the processing, analysis and storage of data in world-class research conducted at JINR and at collaborating centers, both within the Institute's research program, in particular the NICA megaproject, and within priority scientific tasks carried out in cooperation with the world's leading research centers (CERN, etc.). An important aspect of ensuring long-term, energy-efficient and reliable data storage is the use of robotic tape libraries in the storage system for experiments. The report highlights the current status and the development stages of the MICC robotic tape libraries IBM TS3500 and IBM TS4500.

        Speaker: Alexey Golunov (LIT JINR)
      • 16:00
        Complete decentralization of distributed data storages based on blockchain technology 15m

        The report presents a solution for completely decentralized data management in geographically distributed environments with administratively unrelated or loosely related user groups and under partial or complete lack of trust between them. The solution is based on the integration of blockchain technology, smart contracts and provenance-metadata-driven data management. The architecture, operation principles and algorithms developed provide fault-tolerant, safe and reliable management of provenance metadata, control of operations with data files, as well as resource access management in collaborative distributed computing systems. The latter are distributed systems formed by combining the computing resources of various organizations (institutions) into a single pool for joint work within some project.
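The core mechanism that makes provenance metadata tamper-evident without a trusted party is hash chaining, the building block of blockchain. The following is an illustrative sketch of that idea only, not the authors' implementation; record fields and names are assumptions.

```python
# Illustrative sketch: provenance records chained by cryptographic hashes,
# so that any later modification of a record breaks verification.
import hashlib, json

def add_record(chain, operation, data_file, actor):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"op": operation, "file": data_file, "actor": actor,
              "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return record

def verify(chain):
    """Recompute every hash; tampering anywhere breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev"] != prev or digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, "create", "run042.raw", "alice")
add_record(chain, "replicate", "run042.raw", "site-B")
print(verify(chain))      # True
chain[0]["actor"] = "mallory"
print(verify(chain))      # False: tampering detected
```

In the actual system this verification is distributed across the blockchain nodes, and smart contracts enforce the access-management rules.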

        Speaker: Andrey Demichev (SINP MSU)
      • 16:15
        Data Knowledge Base current status and operation 15m

        The Data Knowledge Base (DKB) project is aimed at knowledge acquisition and metadata integration, providing fast responses to a variety of complicated queries, such as summary reports and monitoring tasks (aggregation queries) and multi-system join queries. Such queries are not easy to implement in a timely manner and are obviously less efficient than a query to a single system with integrated and pre-processed information would be. In this work the current status of the project, its integration with the ATLAS Workflow Management system and future perspectives are shown.

        Speaker: Viktor Kotliar (IHEP)
    • 15:30 16:45
      Distributed computing systems 310 or Online - https://jinr.webex.com/jinr/j.php?MTID=m326d389213a5963a1114b8cbf9613612

      • 15:30
        Jira plugin for ALICE instance 15m

        As its bug tracking and project management system, the ALICE experiment uses Jira, which provides a wide range of configuration options. Jira also makes it possible to significantly extend the basic functionality with custom plugins. In this work, the LinkedIssuesHasStatus plugin for the Jira service of the ALICE experiment is developed and implemented. The plugin returns tickets whose linked issues are in a specified status and are linked with a specified link type.
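The selection rule the plugin implements can be sketched in plain Python. The real plugin is a Jira/JQL extension; the simplified data model below (dictionaries with `key`, `status` and `links` fields) is an assumption for illustration only.

```python
# Sketch of the LinkedIssuesHasStatus selection logic (illustrative model,
# not the actual Jira plugin code).
def linked_issues_has_status(issues, link_type, status):
    """Return keys of tickets having at least one link of `link_type`
    whose target issue is in `status`."""
    by_key = {i["key"]: i for i in issues}
    result = []
    for issue in issues:
        for link in issue.get("links", []):
            target = by_key.get(link["target"])
            if link["type"] == link_type and target and target["status"] == status:
                result.append(issue["key"])
                break          # one matching link is enough
    return result

issues = [
    {"key": "ALICE-1", "status": "Open",
     "links": [{"type": "blocks", "target": "ALICE-2"}]},
    {"key": "ALICE-2", "status": "Resolved", "links": []},
]
print(linked_issues_has_status(issues, "blocks", "Resolved"))  # ['ALICE-1']
```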

        Speaker: Andrey Kondratyev (JINR)
      • 15:45
        Remote Procedure Call protocol with support for higher-order functions 15m

        We are developing a tool for calling functions between different environments, where "different environments" means both different programming languages and different machines. The tool is a remote procedure call protocol (and its implementation) that is optimized for simplicity and supports higher-order functions. In our implementation, functions are never serialized and are always executed in the environment where they were created, which allows lambdas with captures and side effects such as system calls. To support as many languages as possible with as little effort as possible, we implemented the core library in C++ and Kotlin/JVM. We will discuss the internal architecture of the project.
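The key idea, that only a function handle crosses the wire while the function body stays and runs in its home environment, can be sketched as follows. This is a single-process toy model under assumed names, not the project's actual API.

```python
# Minimal sketch of handle-based higher-order RPC: a function is registered
# in its owning environment, only an integer handle is "transferred", and
# invocations are routed back to the owner, so captures and side effects work.
import itertools

class Environment:
    """One side of the RPC link; owns locally created functions."""
    _ids = itertools.count()

    def __init__(self):
        self.functions = {}

    def export(self, fn):
        handle = next(self._ids)
        self.functions[handle] = fn
        return handle                 # only the handle crosses the wire

    def invoke(self, handle, *args):
        return self.functions[handle](*args)  # runs where it was created

owner = Environment()
calls = [0]                           # captured mutable state (a side effect)

def doubler(x):
    calls[0] += 1
    return x * 2

h = owner.export(doubler)
print(owner.invoke(h, 21))            # 42; the capture lives in `owner`
print(calls[0])                       # 1
```

A real implementation replaces the direct `invoke` call with a network message carrying the handle and arguments, which is what keeps serialization of the function itself unnecessary.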

        Speaker: Fedor Bukreev (mipt-npm)
      • 16:00
        Application of multi-agent systems to video data processing 15m

        In recent years, intelligent video surveillance systems have become increasingly widespread: security systems, traffic analysis systems, systems for detecting deviant behavior. The number of video cameras grows continuously, the resolution of the acquired images increases, and the processing algorithms become more complex. All this leads to a continuous growth of the generated information and, accordingly, to growing requirements for the computing performance of the processing equipment, the throughput of the data transmission system and fault tolerance. The data processing systems being created must therefore be fault-tolerant, easily scalable and make efficient use of computing resources.

        One widespread data processing task is processing a video stream in order to detect and recognize (identify, classify) objects of interest. While building a system for this task, methods and algorithms for high-speed object detection in a video stream were developed at the SRI MCS SFedU. The research showed, however, that modern computing resources cannot process high-resolution images (more than 10 megapixels) in real time (at least 25 frames per second), have low fault tolerance and do not guarantee performance under difficult conditions. It was therefore decided to create a data processing system with the properties listed above (fault tolerance, easy scalability, efficient use of computing resources).

        It was shown in [1, 2] that, for distributed problems, the use of multi-agent technologies makes it possible not only to increase the efficiency of resource usage and ensure the scalability of the system under development, but also to provide its fault tolerance. Moreover, given sufficient computing resources, a specified processing time for each frame can be guaranteed. The system retains these qualities even when its components are geographically distributed and the quality of communication links varies.

        In a number of projects, the SRI MCS SFedU team has accumulated experience in building distributed fault-tolerant systems based on multi-agent technologies, and this experience can be transferred to the above problem of processing video data from multiple sources.

        [1] Kalyaev I.A., Melnik E.V. Decentralized Computer Control Systems, 2011 (in Russian).

        [2] Kalyaev I.A., Kalyaev A.I., Korovin Ya.S. An algorithm for multi-agent resource scheduling in a heterogeneous cloud environment // Computational Technologies, 2016, Vol. 21, No. 5 (in Russian).

        Speaker: Mr Anatoly Kalyaev (A.V. Kalyaev Scientific Research Institute of Multiprocessor Computing Systems, Southern Federal University)
      • 16:15
        Distributed fault-tolerant computing with SBN-Python on a real company use case 15m

        Distributed computing is in considerable demand today for batch data processing tasks, but the current solutions that make it available from Python are either too narrowly specialized or do not provide full fault tolerance.

        As part of a graduation thesis, a high-level Python interface (hereinafter SBN-Python) to the new C++ framework Subordination was developed, in which the latter problem is solved. To achieve low-level compatibility and to support all operating scenarios, the interface was implemented as a Python interpreter extension.

        The goal of this work was to test the applicability of the new interface on a real use case of the OOO "Gazpromneft-TsR" company, demonstrating the principles of its use along the way.

        To achieve this goal, the existing solution was analyzed, a new architecture using SBN-Python was designed and implemented, and the resulting solution was deployed on the company's computing facilities.

        The results show that SBN-Python on a real use case still yields a performance gain as the number of cluster nodes grows, handles various node failure scenarios within acceptable time, and offers certain architectural advantages in organizing the computation.

        In the future, it is planned to widen the scope of the new interface by implementing, on its basis, support for building distributed web services.

        Speaker: Mr Dmitry Tereshchenko
      • 16:30
        Development and improvement of methods for assessing the quality of technical systems during operation 15m

        Rational tools are proposed for the innovative development and improvement of methods for assessing the quality of technical systems at the operation stage. The innovations are based on a set of interrelated models, procedures and computer programs implementing them, which will reduce the time and resources spent on assessing the quality of systems and (or) their elements (products).
        The presented procedures are recommended for making timely management decisions based on the results of quality assessment of technical systems during operation.
        Keywords: method, system, assessment, rating, quality, decision, procedure, model, principles.

        The relevance of identifying and substantiating innovative ways of developing and improving methods for assessing the quality of technical systems (hereinafter, systems) stems from modern requirements for improving management and decision making in order to increase the efficiency of the assessed systems, which in turn calls for new scientifically grounded methodological, technical and technological solutions [1, 2, 7, 11].
        Today, one rational way of developing such solutions is to create new and improve existing quality assessment procedures, as well as rational information processing procedures and system management models for the operation stage. Given that efficient and sustainable operation of systems is of vital importance for the economy, the creation and adoption of these solutions is particularly relevant and contributes significantly to the development of productive forces [7, 9, 10]. The proposed innovations are aimed at mitigating an acute problem: resolving the contradiction between the growing volume and dynamics of heterogeneous, structured and unstructured data about the systems under study and the objective need to make timely management decisions on the basis of existing quality assessment methods [2, 8, 9, 11].
        Thus, the essence of the innovative approach is to improve existing quality assessment methods taking into account the specifics of operation and the future capabilities of the assessed systems.
        In turn, the implementation of the proposed direction will provide a set of scientifically grounded technical, technological and methodological solutions that in practice create the prerequisites for more efficient management without additional resources [2, 9].
        Clearly, with such a problem statement, the innovative development and improvement of the methods is aimed at increasing the efficiency of management and decision making at the operation stage by significantly reducing the time and resources spent on quality assessment, through the development and adoption of a set of innovative quality assessment procedures and the computer programs implementing them (hereinafter, procedures).
        In practice, adopting the procedures will shorten the time needed for well-founded management decisions as the volume of information used for system quality assessment grows.
        The current level of information technology and the informatization of society make it reasonable to consider the above research problem in connection with the contradictions of managing organizational-technical systems [1, p. 40]. It is logical to systematize the modern contradictions of management into four levels: personal, societal, technical and organizational-technical. In the proposed innovative approach, the methodology being developed applies mainly at the technical and organizational-technical levels [1, p. 42].
        At present, the diversity of hardware and software tools and other technical systems and the current state of technology necessitate the integrated use of information from internal and external information resources. The development of information technology makes it possible to form information reserves (hereinafter, IR) for system quality assessment.
        With the above in mind, the innovations under consideration rest on three principles.
        The first principle is to ensure the effectiveness of the methods and to minimize the risk of a wrong decision based on the quality assessment results.
        The second principle is the universality and practical orientation of the methods, and their low cost and simplicity of implementation in various systems without significant expenditure of time and resources on communications equipment, computing hardware and personnel training.
        The third principle is the structural-functional classification of the assessed systems (products) within the information and computing system (ICS). Its essence is to classify the set of assessed systems first by place of operation (operating conditions), and then to divide the resulting groups into groups (subgroups) according to their functional purpose and the tasks they perform [2, p. 199].
        Following these principles, the most rational way to apply the procedures effectively is to develop a universal model of system support during operation.
        The Model describes the process of supporting systems during operation with the aim of rationally organizing the work on improving and developing them. The Model is universal and applicable to supporting systems of various purposes, taking into account the specifics of their functioning and possible adverse conditions. It is intended to define the role, place, content and most rational application of information reserves, procedures and quality assessment software during operation while saving time and resources [6, p. 105].
        The system efficiency assessment procedure. The procedure is based on a modified DEA method [3, p. 612]. In practice it is implemented by the computer program "Analysis and assessment of system efficiency" (state registration certificate No. 2020610389 of 14.01.2020, author K.Z. Bilyatdinov). The procedure and the program are intended for analyzing and assessing the efficiency of systems and (or) of a single system over different periods of its operation.
        The program performs all the calculations needed to build tables, to systematize the obtained values of the correlation between system performance indicators, and to produce graphs and diagrams. It makes it possible to compare systems (or time periods) by the result and (or) performance indicator and by the ratio of spent resources to result, as well as to rank systems.
        The main positive effect of adopting the procedure and the program is the clarity and visibility of the results, the reduction of time and resources spent on efficiency assessment, and better-founded management decisions thanks to the possibility of comparing and detailing the resources spent when analyzing and assessing system efficiency [3, p. 617].
        The system sustainability assessment procedure [2]. In practice it is implemented by the computer program "Assessment of system sustainability" (state registration certificate No. 2020615328 of 21.05.2020, author K.Z. Bilyatdinov). The procedure and the program are intended for:
        1. Assessing how sustainably systems perform their function(s) and (or) maintain operation depending on the impact of adverse conditions in a given period.
        2. Calculating the system sustainability coefficient as a quality indicator in the field of sustainability.
        3. Analyzing the influence of the quality indicators of systems (products) on the sustainability of the overall system and substantiating recommendations for increasing it by improving those indicators (improving operation, technical support and personnel training, choosing a modernization direction or justifying the need to create new systems).
        The integrated procedure for assessing system quality during operation. The procedure has two variants, depending on the type of information used in the assessment: statistical data only, or statistical plus expert information. Its essence is the step-by-step application of calculation tables interlinked through specified cells for quality assessment using statistical and expert information, with the possibility of identifying information sources [4, p. 21].
        The method of rational work with information resources and formation of information reserves of systems (hereinafter, the Method) [2, 6]. In practice the Method is implemented by the computer program "Implementation of the method of rational work with information resources and formation of information reserves of a system" (state registration certificate No. 2020610335 of 13.01.2020, author K.Z. Bilyatdinov).
        The Method is intended to substantially reduce personnel workload and the time spent on collecting and processing information, and to form and replenish (keep up to date) the information reserves (IR) for quality assessment as the volume of processed information grows.
        Its essence is that the information and the software tools applied are systematized in information reserves consisting of a database (DB) of basic information about the system and an archive. Routine information collection and processing tasks are solved in two successive stages [6, p. 108].
        Stage one is working with the ICS IR: the task is solved by querying the IR. A query first goes to the DB of basic information about the system, and then to the archive. In most cases, the decision makers (officials) are satisfied with the answer from the DB of basic information about the system; the work then ends with the completion of the first stage, and the system's information reserves are replenished with information about the completed task.
        Stage two is querying external information resources.
        If the system's information reserves do not contain the necessary information in full, then, by decision of the official (decision maker), the information is searched for in the information resources of the Internet and (or) a request is sent to other agencies and organizations [6, p. 110].
        In conclusion, it is worth emphasizing that the proposed innovative approaches increase the universality of the methodology. The methodology and its components can therefore be applied effectively in three main directions:
        1. Increasing the efficiency of management, operation and technical support of the assessed systems.
        2. Improving the quality of research and development work on creating and modernizing systems.
        3. Improving the quality of personnel training in the operation and technical support of hardware-software tools and technical systems.
        The most effective application of the developed methods (procedures and computer programs) is their use as an essential part of the integration reserves for increasing the efficient functioning of systems during operation.

        References
        1. Bilyatdinov K.Z. Contradictions of the management process in the modern world // Vek Kachestva. No. 3. 2014. Pp. 40-43. ISSN 2219-8210 (in Russian).
        2. Bilyatdinov K.Z., Menyaylo V.V. Methodology for assessing system quality in the field of sustainability of large technical objects // Vek Kachestva. No. 2. 2020. Pp. 198-214. ISSN 2219-8210 (in Russian).
        3. Bilyatdinov K.Z., Menyaylo V.V. A modified DEA method and a procedure for assessing the efficiency of technical systems // Informatsionnye Tekhnologii. No. 11. 2020. Pp. 611-617 (in Russian).
        4. Bilyatdinov K.Z. An integrated procedure for assessing the quality of technical systems during operation // Nauchno-Tekhnicheskiy Vestnik Povolzhya. No. 11. 2020. Pp. 20-23 (in Russian).
        5. Bilyatdinov K.Z. A procedure for assessing probabilistic characteristics of technical systems during operation // Nauchno-Tekhnicheskiy Vestnik Povolzhya. No. 12. 2020. Pp. 21-24 (in Russian).
        6. Bilyatdinov K.Z., Shestakov A.V. Creation and use of information reserves in the support of large technical systems // Trudy Uchebnykh Zavedeniy Svyazi. Vol. 6. No. 4. 2020. Pp. 104-110 (in Russian).
        7. Duer S. Assessment of the operation process of wind power plant's equipment with the use of an artificial neural network // Energies. 2020. No. 13. Art. 2437.
        8. Gerami J. An interactive procedure to improve estimate of value efficiency in DEA // Expert Systems with Applications. 2019. No. 137. Pp. 29-45.
        9. Golabchi A., Han S., AbouRizk S. A simulation and visualization-based framework of labor efficiency and safety analysis for prevention through design and planning // Automation in Construction. 2018. Vol. 96. Pp. 310-323.
        10. Trevino M. Cyber Physical Systems: The Coming Singularity // PRISM. 2019. Vol. 8. No. 3. Pp. 2-13.
        11. Yazdi M. Introducing a heuristic approach to enhance the reliability of system safety assessment // Quality and Reliability Engineering International. 2019. Vol. 35(8). Pp. 2612-2638.

        Speaker: Kamil Zakirovich Bilyatdinov (ITMO National Research University, St. Petersburg)
    • 15:30 18:00
      HPC 403 or Online - https://jinr.webex.com/jinr/j.php?MTID=mf93df38c8fbed9d0bbaae27765fc1b0f

      • 15:30
        Intel oneAPI for xPU 30m
        Speaker: Dmitry Sivkov (Intel)
      • 16:00
        OpenMP computing of a reference solution for coupled Lorenz system on [0,400] 15m

        Obtaining a long-term reference trajectory on the chaotic attractor of the coupled Lorenz system is a difficult task due to the sensitive dependence on the initial conditions. Using standard double-precision floating-point arithmetic, we cannot obtain a reference solution longer than 2.5 time units. Combining the OpenMP parallel technology with the GMP library (GNU multiple-precision library), we parallelize the Taylor series algorithm for the coupled Lorenz system and obtain, in 6 days, a reference solution on the rather long time interval [0, 400].
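The core of such an algorithm is computing Taylor coefficients of the solution by recurrence in multiple-precision arithmetic. The sketch below illustrates one step for the single (uncoupled) Lorenz system; the paper's code uses C with GMP and OpenMP, while here Python's standard `decimal` module stands in for GMP purely for illustration, with assumed classical parameters σ=10, ρ=28, β=8/3.

```python
# Illustrative multiple-precision Taylor step for the Lorenz system
# x' = s(y - x), y' = x(r - z) - y, z' = xy - b z.
# Taylor coefficients are built by recurrence; products like (xz)_k are
# Cauchy convolutions of the coefficient sequences.
from decimal import Decimal, getcontext

getcontext().prec = 60                      # working precision in digits
S, R, B = Decimal(10), Decimal(28), Decimal(8) / 3

def taylor_step(x, y, z, order, h):
    """Advance (x, y, z) by one step of size h using a Taylor series."""
    X, Y, Z = [x], [y], [z]
    for k in range(order):
        xz = sum(X[i] * Z[k - i] for i in range(k + 1))
        xy = sum(X[i] * Y[k - i] for i in range(k + 1))
        X.append(S * (Y[k] - X[k]) / (k + 1))
        Y.append((R * X[k] - xz - Y[k]) / (k + 1))
        Z.append((xy - B * Z[k]) / (k + 1))
    nx = ny = nz = Decimal(0)               # Horner evaluation at t = h
    for k in range(order, -1, -1):
        nx = nx * h + X[k]; ny = ny * h + Y[k]; nz = nz * h + Z[k]
    return nx, ny, nz

x, y, z = taylor_step(Decimal(1), Decimal(1), Decimal(1), 20, Decimal("0.01"))
print(x, y, z)
```

For the coupled system the same recurrences are evaluated per subsystem, which is where the OpenMP parallelization applies.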

        Speaker: Dr Ivan Hristov (Sofia University, FMI, Bulgaria)
      • 16:15
        Overlapping Computation and Communication in Matrix-Matrix Multiplication Algorithm for Multiple GPUs 15m

        In this talk, we discuss the optimal strategy for a parallel matrix-matrix multiplication algorithm that minimizes the time-to-solution by finding the best parameters for overlapping multiplications of separate tiles on each GPU with data transfers between GPUs. A new algorithm developed for multi-GPU nodes is discussed [1]. The correlation between the optimal parameters of the algorithm and the hardware specifications (e.g. floating-point performance and memory bandwidth) is analyzed. The results are illustrated by benchmarks made for different Nvidia GPUs connected with PCIe or NVLink.

        [1] Choi Y. R., Nikolskiy V., Stegailov V. Matrix-Matrix Multiplication Using Multiple GPUs Connected by NVLink // 2020 Global Smart Industry Conference (GloSIC). IEEE, 2020. Pp. 354-361.
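The overlap strategy can be shown schematically: while tile k is being multiplied, tile k+1 is already in flight, as a multi-GPU code would arrange with separate CUDA streams. The CPU-only sketch below simulates the "transfer" with a thread pool and is an illustration of the scheduling idea, not the benchmarked algorithm.

```python
# Schematic double-buffered tiled matmul: prefetch the next tile while
# computing the current one, overlapping "transfer" and "compute".
from concurrent.futures import ThreadPoolExecutor

def transfer(tile):              # stands in for a PCIe/NVLink copy
    return tile

def multiply(a_tile, b):         # stands in for a GEMM on one tile
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a_tile]

def tiled_matmul_overlapped(a, b, tile_rows):
    tiles = [a[i:i + tile_rows] for i in range(0, len(a), tile_rows)]
    out = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        pending = pool.submit(transfer, tiles[0])    # prefetch first tile
        for k in range(len(tiles)):
            tile = pending.result()
            if k + 1 < len(tiles):                   # overlap: start next copy
                pending = pool.submit(transfer, tiles[k + 1])
            out.extend(multiply(tile, b))            # compute current tile
    return out

a = [[1, 2], [3, 4], [5, 6], [7, 8]]
b = [[1, 0], [0, 1]]                                 # identity
print(tiled_matmul_overlapped(a, b, 2))              # returns a unchanged
```

The tile size plays the role of the algorithm's tunable parameter: too small and transfer latency dominates, too large and the overlap window shrinks.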

        Speaker: Yea Rem Choi (HSE)
      • 16:30
        Verifiable application-level checkpoint and restart framework for parallel computing 15m

        Fault tolerance of parallel and distributed applications is one of the concerns that become topical for large computer clusters and large distributed systems. For a long time the common solution to this problem was checkpoint and restart mechanisms implemented at the operating system level; however, they are inefficient for large systems, and application-level checkpoint and restart is now considered a more efficient alternative. In this paper we implement application-level checkpoint and restart manually for well-known parallel computing benchmarks to evaluate this alternative approach. We measure the overheads introduced by creating and restarting from a checkpoint, and the amount of effort needed to implement and verify the correctness of the resulting program. Based on the results, we propose a generic framework for application-level checkpointing that simplifies the process and makes it possible to verify that the application gives correct output when restarted from any checkpoint.
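The defining feature of the application-level approach is that the application itself chooses what state to save and where the loop resumes. The minimal sketch below is illustrative (file name, state layout and checkpoint interval are assumptions), not the paper's framework.

```python
# Minimal application-level checkpoint/restart: save loop state atomically,
# resume from the last saved iteration after a crash.
import json, os, tempfile

CKPT = os.path.join(tempfile.gettempdir(), "app_ckpt.json")

def save_checkpoint(state, path=CKPT):
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)          # atomic rename: never a half-written file

def load_checkpoint(path=CKPT):
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"i": 0, "acc": 0}      # fresh start

def run(n, checkpoint_every=10):
    state = load_checkpoint()
    for i in range(state["i"], n):
        state["acc"] += i          # the "computation"
        state["i"] = i + 1
        if state["i"] % checkpoint_every == 0:
            save_checkpoint(state)
    return state["acc"]

if os.path.exists(CKPT):
    os.remove(CKPT)
print(run(100))                    # 4950; a killed run resumes from the last checkpoint
```

Verifiability here means that restarting from any saved state must yield the same final output as an uninterrupted run, which is exactly the property a framework can check automatically.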

        Speaker: Mr Ivan Gankevich (Saint Petersburg State University)
      • 16:45
        Development of information systems for theoretical and applied tasks on the basis of the HybriLIT platform 15m

        The report gives an overview of two information systems (IS) under development on the basis of the HybriLIT platform. The major goal of creating these ISs is to automate calculations, as well as to ensure data storage and analysis for different research groups.

        The information system for radiobiological research provides tools for storing experimental data of different types, a software set for analyzing behavioral patterns of laboratory animals and studying pathomorphological changes in the central nervous system after exposure to ionizing radiation and other factors. The given IS comprises blocks for storing and providing access to experimental data and a data analysis block based on machine and deep learning and computer vision algorithms.

        In addition, a virtual research environment (VRE) for modeling physical processes in complex systems based on Josephson junctions is being developed within the HybriLIT platform. The VRE combines convenient tools based on web technologies for creating models with an interface for performing calculations on the HybriLIT heterogeneous computing platform and visualizing calculation results, and provides different research groups with an environment for organizing joint studies and exchanging models and calculation results.

        Speaker: Yuri Butenko (JINR)
      • 17:00
        Data analysis for improving the utilization efficiency of the HybriLIT high-performance heterogeneous computing platform 15m

        The HybriLIT heterogeneous computing platform is part of the Multifunctional Information and Computing Complex of the Meshcheryakov Laboratory of Information Technologies of the Joint Institute for Nuclear Research. Data on the usage of the HybriLIT platform were analyzed, with particular attention to the resources requested by different users when submitting jobs and to job execution times. The relevance of this study lies in the possibility of forecasting the future load of the platform on the basis of this analysis, which will enable a more rational and efficient use not only of the available computing resources but also of the storage resources. The results of the data analysis will be presented, various models for forecasting the usage of the platform resources will be considered, and the model that showed the best accuracy in verification will be identified.

        Speaker: Ekaterina Polegaeva (Dubna University)
      • 17:15
        The grid-characteristic method for applied dynamic problems 15m

        Due to the rapid development of high-performance computing systems, more and more complex and time-consuming computer simulations can be carried out, opening new opportunities for scientists and engineers. A standard situation for scientific groups now is to have their own in-house research software, significantly optimized and adapted for a very narrow scientific problem. The main disadvantage of this approach is the necessity to support many computer programs, which leads to code duplication and ineffective use of researchers' time. To overcome this, a uniform approach may be used in combination with a modular structure of the in-house software.
        In the current work, the numerical solution of dynamic linear hyperbolic systems is considered. They describe wave problems and are widely used in earth seismicity simulations, seismic survey processing, non-destructive testing of composites, etc. The grid-characteristic method on structured meshes can be successfully applied to this class of problems. However, for a general hyperbolic system the numerical calculation or storage of transformation matrices is necessary. To avoid this drawback, they can be precalculated analytically and incorporated (as separate functions) into the solver source code. The second challenge is the dependence of the matrix spectrum on the medium rheology, which prevents the use of a single mesh for the whole computational domain. The procedure of explicit contact correction eliminates this challenge.
        The described uniform approach was successfully applied to simulate:
        - seismic wave propagation in porous fluid-saturated media;
        - seismic processes in ice;
        - dynamic behavior of thawed zones;
        - dynamic loading of fractured media;
        - dynamic processes in complex elastic geological models;
        - acoustic diagnostic of the heterogeneity of the damaged zone;
        - elastic wave propagation in vertically transversely isotropic and general anisotropic media.
        To achieve sufficient computational speed on large grids, the general framework designed by N.I. Khokhlov at MIPT was used. It is parallelized with OpenMP, MPI and GPGPU technologies and shows good scalability up to thousands of cores.
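The grid-characteristic idea is easiest to see on the simplest hyperbolic equation: each new nodal value is found by tracing the characteristic back a distance c·dt and interpolating between grid nodes. The toy sketch below (1D scalar advection, linear interpolation) is an illustration of the principle only; real solvers apply it to systems through Riemann invariants.

```python
# Toy grid-characteristic (upwind) step for 1D advection u_t + c u_x = 0:
# trace the characteristic back by c*dt and interpolate linearly.
def characteristic_step(u, c, dt, dx):
    s = c * dt / dx                  # CFL number, assume 0 <= s <= 1
    # left boundary value kept fixed (inflow condition)
    return [u[0]] + [u[i] - s * (u[i] - u[i - 1]) for i in range(1, len(u))]

u = [0.0] * 5 + [1.0] * 5            # step profile
for _ in range(4):
    u = characteristic_step(u, c=1.0, dt=0.5, dx=1.0)   # s = 0.5
print(u)                             # the step has smeared and moved right
```

For an elastic or acoustic system the same tracing is done for each characteristic field after diagonalizing with the (analytically precalculated) transformation matrices mentioned above.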

        Speaker: Dr Vasily Golubev (Moscow Institute of Physics and Technology)
    • 15:30 17:30
      Research infrastructure Conference Hall or Online - https://jinr.webex.com/jinr/j.php?MTID=m6e39cc13215939bea83661c4ae21c095

      • 15:30
        A virtual testbed for optimizing the performance of a new type of accelerators 15m

        The concept of a "virtual testbed", namely the creation of a problem-oriented environment for full-featured modeling of an investigated phenomenon or of the behavior of a complex technical object, has by now acquired a finished look. The development of this concept contributed to the creation of complex mathematical models suitable for full-fledged computational experiments and to the improvement of computer platforms capable of providing the expected results.
        The traditional type of "virtual testbed" is a computing environment for modeling complex processes whose physics is fully known. In this case, full simulation of the investigated phenomena can be carried out from first principles. However, there is another type of "virtual testbed", in which versatile modeling brings us closer to understanding phenomena whose physics is not yet fully clear and which require various experiments to grasp their essence. For this purpose, it is necessary to create experimental installations from which new results are expected. There is yet another option, the solution of an inverse problem, which can be mathematically ill-posed and requires a large number of direct simulations for optimization.
        The concept of a "virtual accelerator" emerged quite some time ago. Its key idea is to simulate the dynamics of a beam using several software packages (COSY Infinity, MAD and others), compiled in the form of pipelines and running on distributed computing resources. The main application of such a virtual accelerator is the simulation of beam dynamics with various packages, with the possibility of comparing the results and the ability to create pipelines of tasks in which the results of one processing stage, based on a specific software package, can be sent as initial data for another processing step. However, for a new type of accelerators (such as NICA), this approach no longer meets all the requirements for a virtual accelerator.
        The report presents the results of building a prototype of a virtual accelerator using NICA as an example, with a description of methods for accelerating computational procedures.

        Speaker: Maria Mingazova (St.Petersburg State University)
      • 15:45
        New hardware testing methodology at IHEP data center 15m

        A modern computing center is not only production capacity; it is also stable operation. Stability concerns not only the reliability of the software component, but also the hardware assembly components, and it is important to ensure it before production. One of the ways to do so is testing components for performance, reliability and assembly defects. In this work we present a methodology for identifying problem points, the specific tests used and some results of their application at the IHEP data center.

        Speaker: Victoria Ezhova (IHEP)
      • 16:00
        Modeling network data traffic for vulnerability scan using the TrafficREWIND test bench infrastructure of TIER1 data centers at JINR 15m

        Modeling network data traffic is a crucial task in the design and construction of new network centers and campus networks. The results of model analysis can be applied in the reorganization of existing centers and in the configuration of data routing protocols based on link usage. The paper shows how constant monitoring of the main directions of data transfer allows optimizing the payload of links by increasing the priority of different types of traffic. The basic elements for solving this problem are various ways of coloring data. Today this can be implemented with the help of variable-length subnet masks, the "quality of service" (QoS) field in the transmitted frame, and Deep Packet Inspection (DPI). One of the newest approaches is mirroring, which visualizes the network at the application level (level 7 of the OSI model). The paper presents a plan for the deployment of such a system in the TIER1 and TIER2 centres at JINR using the Ixia TrafficREWIND technology as an example. An initial analysis of the traffic distribution in the data center is made, graphs are shown, and conclusions are drawn on the implementation of the measures necessary to reduce link usage. It is shown how vulnerabilities can be scanned for on a data traffic model. The conclusion presents the benefits and disadvantages of the data mirroring method.
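
        For illustration, the DSCP "color" used for QoS marking lives in the upper six bits of the IPv4 ToS byte; a toy parser (the header bytes and addresses below are example values, not traffic from the JINR centre):

```python
import struct

# Traffic "coloring" via the DSCP bits of the IPv4 ToS byte: a toy
# parser showing where QoS marking lives in the packet header.
def dscp_of(ip_header: bytes) -> int:
    """Return the 6-bit DSCP value from a raw IPv4 header."""
    version_ihl, tos = struct.unpack_from("!BB", ip_header, 0)
    assert version_ihl >> 4 == 4, "not an IPv4 header"
    return tos >> 2                       # DSCP = upper 6 bits of ToS

# Minimal 20-byte IPv4 header with DSCP=46 (Expedited Forwarding),
# addresses taken from the documentation range (example values).
hdr = struct.pack("!BBHHHBBH4s4s",
                  0x45, 46 << 2, 20, 0, 0, 64, 17, 0,
                  bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2]))
print(dscp_of(hdr))   # 46
```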

        Speaker: Andrey Baginyan (ccnp)
      • 16:15
        IPv6 dual-stack deployment for the distributed computing center 15m

        The Computing Center of the Institute for High Energy Physics in Protvino provides computing and storage resources for various HEP experiments (ATLAS, CMS, ALICE, LHCb) and currently operates more than 150 worker nodes with around 3000 cores, providing nearly 2 PB of disk space. All resources are connected through two 10 Gb/s links to LHCONE and other research networks. The IHEP computing center's IPv4 address space is limited to one class C network, and all worker nodes are installed behind NAT, which has some drawbacks for production use. To optimize routing and switching and to obtain higher network throughput for data transfer, an IPv6 dual-stack deployment was carried out for the computing farm. In this work, the full cycle of a real IPv6 dual-stack deployment, from zero to production, will be shown. This work can be used by other WLCG centers, and by other data centers for distributed computing, as an example of the necessary steps and configurations.
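
        One possible shape of such a dual-stack node configuration is sketched below in Netplan syntax (the interface name and the documentation-prefix addresses are placeholders; the actual IHEP configuration is described in the talk):

```yaml
network:
  version: 2
  ethernets:
    eno1:                          # example interface name
      addresses:
        - 192.0.2.10/24            # existing IPv4 address (example)
        - 2001:db8:100::10/64      # added IPv6 address (example)
      routes:
        - to: default
          via: 192.0.2.1           # IPv4 gateway
        - to: default
          via: 2001:db8:100::1     # IPv6 gateway
      nameservers:
        addresses: [192.0.2.53, 2001:db8::53]
```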

        Speaker: Viktor Kotliar (IHEP)
      • 16:30
        Services of computational neurobiology tasks, based on the distributed modular platform «Digital Laboratory» NRC «Kurchatov Institute» 15m

        This report will present services for performing computational neurobiology tasks that work with experimental data from nuclear magnetic resonance imaging of the human brain. These services are created as separate modules based on the "Digital Laboratory" platform of the NRC "Kurchatov Institute".
        On the basis of the distributed modular platform «Digital Laboratory» at the NRC "Kurchatov Institute" NBICS Complex, an information and analytical environment was organized as a system that combines the scientific equipment of the Resource Centers, the Computing Center, virtual machines and personal computers of the scientific laboratories into a single virtual space, while organizing the exchange of data between various buildings, as well as their processing, analysis and storage. The work with the system is carried out through a user web interface.

        Speaker: Irina Enyagina (Kurchatov Institute)
      • 16:45
        Precomputation formal verification of HPC cluster applications using SOPN Petri nets 15m

        In this work, we present a formal mathematical model and a software library for modelling hardware components and software systems based on SOPN Petri nets and the C# programming language.
        A discrete stochastic model, denoted SOPN, is presented, which combines the qualities of coloured, hierarchical and generalized Petri nets. The model is a series of extensions over the standard apparatus of stochastic Petri nets, which allows a direct transition from the SOPN model to the basic model of stochastic Petri nets.
        Several additional terms were introduced into the model to describe the grouping of elements in the hierarchy of service-system components. The paper presents several theorems that establish significant properties of the resulting models from the point of view of the possibilities of their composition. The developed model has many new properties for describing complex computing systems based on Petri nets. For example, messages in service systems can be split into packets, some of which are lost and recovered during transmission; on the receiver's side, the packets must be reassembled into a single message.
        To accomplish this task, it is necessary to support a property that is new for Petri nets: the decomposition and merging of data elements (labels). This, in turn, requires the ability to identify labels, which is provided in the proposed SOPN model.
        To work correctly with complex data types, the model allows us to consider various levels of the logic of interaction between system components and to simulate potential problems at the software and hardware levels.
        A method for organizing an object model based on Petri nets for modelling software systems was created, based on the division of the graph of the logic of operations on elements and their storage locations. The presented method makes it possible to solve the posed problem while maintaining backward compatibility with the model of stochastic Petri nets. Based on the obtained solution, it is possible to represent intricate HPC applications interaction patterns.
        To model real systems within the presented logic framework, a methodology was developed for assessing the performance and identifying the bottlenecks of a distributed service system, taking into account potential infrastructure problems.
        The reliability of the model is confirmed by the correspondence of the constructed discrete models' behaviour to the analytical predictions of Amdahl's and Gustafson's laws. The paper presents an example of testing the system on an MPI application running on a cluster with a Fat Tree architecture, and of the change in its behaviour when network equipment error rates are varied.
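
        As an illustration of the split/merge behaviour described above, a minimal stochastic Petri net simulator can be sketched as follows (a toy race-based simulator, not the SOPN library from the talk; the net structure is an invented example):

```python
import random

# A minimal stochastic Petri net simulator: transitions fire after
# exponentially distributed delays, and the fastest enabled transition
# wins each step (race semantics).
def simulate(places, transitions, steps, seed=0):
    rng = random.Random(seed)
    m = dict(places)                              # current marking
    for _ in range(steps):
        enabled = [t for t in transitions
                   if all(m[p] >= n for p, n in t["in"].items())]
        if not enabled:
            break
        # race: sample a delay for each enabled transition
        t = min(enabled, key=lambda t: rng.expovariate(t["rate"]))
        for p, n in t["in"].items():              # consume input tokens
            m[p] -= n
        for p, n in t["out"].items():             # produce output tokens
            m[p] = m.get(p, 0) + n
    return m

# Toy model: a message is split into two packets, each transmitted,
# then reassembled -- the split/merge of labels mentioned above.
net = [
    {"rate": 1.0, "in": {"msg": 1}, "out": {"pkt": 2}},         # split
    {"rate": 5.0, "in": {"pkt": 1}, "out": {"recv": 1}},        # send
    {"rate": 2.0, "in": {"recv": 2}, "out": {"delivered": 1}},  # merge
]
print(simulate({"msg": 1, "pkt": 0, "recv": 0}, net, steps=10))
```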

        Speaker: Oleg Iakushkin (Saint-Petersburg State University)
      • 17:00
        Transformer-based Model for the Semantic Parsing of Error Messages in Distributed Computing Systems in High Energy Physics 15m

        Large-scale computing centers supporting modern scientific experiments store and analyze vast amounts of data. A noticeable number of computing jobs executed within the complex distributed computing environments ends with errors of some kind, and the amount of error log data generated every day complicates manual analysis by human experts. Moreover, traditional methods such as specifying regular expression patterns to automatically group error messages become impractical in a heterogeneous computing environment without a well-defined structure of error messages. ClusterLogs framework for error message clustering was developed to address this challenge. The framework can discover common patterns in error messages from various sources and group them together. One of the essential results of this process is the clear automated description of the resulting clusters, which will be used for the analysis.
        In this research, we propose that interpreting error messages as a natural language allows us to use transformer-based deep learning models such as BERT for this task. A model for extracting the relevant part of messages was trained and integrated into ClusterLogs to represent each cluster as a few actionable items, ensuring better interpretation and validation of the results of clustering.
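
        The pattern-discovery idea can be illustrated with a much simpler token-masking scheme (a toy sketch, not the ClusterLogs or BERT pipeline; the regexes and log lines are invented examples):

```python
import re
from collections import defaultdict

# Simplified version of error-message pattern discovery: mask volatile
# tokens (numbers, hex ids, paths) so that messages sharing a template
# fall into the same cluster.
MASKS = [
    (re.compile(r"0x[0-9a-fA-F]+"), "<HEX>"),
    (re.compile(r"/[\w./-]+"), "<PATH>"),
    (re.compile(r"\d+"), "<NUM>"),
]

def template(msg: str) -> str:
    """Replace volatile substrings with placeholder tokens."""
    for pat, repl in MASKS:
        msg = pat.sub(repl, msg)
    return msg

def cluster(messages):
    """Group messages by their masked template."""
    groups = defaultdict(list)
    for m in messages:
        groups[template(m)].append(m)
    return groups

logs = [
    "job 1042 failed: cannot open /scratch/data/run17.root",
    "job 77 failed: cannot open /scratch/data/run3.root",
    "timeout after 300 s on node 12",
]
for tpl, members in cluster(logs).items():
    print(len(members), tpl)
```

        A transformer-based model, as proposed in the talk, replaces the hand-written masks with learned extraction of the relevant part of each message.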

        Speaker: Dmitry Grin
    • 09:00 10:30
      Plenary reports Conference Hall, 5th floor
      • 09:00
        Accounting and monitoring infrastructure for Distributed Computing in the ATLAS experiment 45m

        The ATLAS experiment uses various tools to monitor and analyze the metadata of the main distributed computing applications. One of the tools is fully based on the unified monitoring infrastructure (UMA) provided by the CERN-IT Monit group. The UMA infrastructure uses modern and efficient open-source solutions such as Kafka, InfluxDB, ElasticSearch, Kibana and Grafana to collect, store and visualize metadata produced by data and workflow management systems. This software stack is adapted for the ATLAS experiment and allows the development of dedicated monitoring and accounting dashboards in Grafana visualization environment. The current state of the monitoring infrastructure and overview of core monitoring and accounting dashboards in the ATLAS are presented in this contribution.

        Speaker: Mr Aleksandr Alekseev (Ivannikov Institute for System Programming of the RAS)
      • 09:45
        Information technologies based on DNA. Nanobioelectronics 45m

        The DNA molecule is a clear example of data storage and biocomputing. Performing millions of operations simultaneously, a DNA biocomputer allows the performance rate to increase exponentially. The limiting problem is that each stage of parallel operations requires time measured in hours or days. Nanobioelectronics can help to overcome this problem.
        The central problem of nanobioelectronics is the realization of effective charge transfer in biomacromolecules. The most promising molecule for this goal is DNA. Computer simulation of charge transfer can complement natural experiments on such a complex object as DNA. Such charge transport processes as Bloch oscillations, soliton evolution, polaron dynamics, breather creation and breather-inspired charge transfer are modeled. The supercomputer simulation of charge dynamics at finite temperatures is presented. Different molecular devices based on DNA are considered.
        The work is supported by RFBR project N 19-07-0046

        Speaker: Mr Victor Lakhno (Institute of Mathematical Problems of Biology RAS – the Branch of Keldysh Institute of Applied Mathematics of Russian Academy of Sciences )
    • 10:30 11:00
      Coffee 30m
    • 11:00 12:30
      Big data Analytics and Machine learning. 407 or Online - https://jinr.webex.com/jinr/j.php?MTID=m573f9b30a298aa1fc397fb1a64a0fb4b

      • 11:00
        Neural network approach to the problem of image segmentation for morphological studies at LRB JINR 15m

        The report will present the results of the development of the algorithmic block of the Information System (IS) for radiobiological studies, created within a joint project of MLIT and LRB JINR, in terms of solving the segmentation problem in morphological research to study the effect of ionizing radiation on biological objects. The problem of automating the morphological analysis of histological preparations is solved within the framework of the project by implementing algorithms based on the neural network approach and computer vision methods.
        The results of the investigations will be used in the development of the algorithmic block of the BIOHLIT information system.

        Speaker: Oksana Streltsova (Meshcheryakov Laboratory of Information Technologies, JINR)
      • 11:15
        Algorithms for behavioral analysis of laboratory animals in radiobiological research at LRB JINR 15m

        Speakers: Alexey Stadnik (Meshcheryakov Laboratory of Information Technologies, JINR) , Dina Utina (LRB JINR)
      • 11:30
        ON DEEP LEARNING FOR OPTION PRICING IN LOCAL VOLATILITY MODELS 15m

        The existence of an exact closed-form formula for the price of a derivative is rather rare in derivative pricing; therefore, to determine the price of a derivative, one has to apply various numerical methods, including finite difference methods, binomial trees and Monte Carlo simulations. Alternatively, derivative prices can be approximated with deep neural networks.

        We study the pricing of European call and put options with deep neural networks under the assumption that the volatility is a function of the underlying asset price and time (option pricing in a local volatility model). We apply the recently introduced deep learning algorithm for solving partial differential equations (the DGM algorithm) and investigate its performance when pricing options in local volatility models with known exact closed-form solutions.

        We introduce enhancements to the commonly used neural network architecture and loss function of the option pricing problem to improve the accuracy of approximation and to ensure convergence of the neural network training. We consider enhanced deep learning algorithm for pricing options and its implementation with TensorFlow framework. Option pricing with deep neural networks and hardware-accelerated TensorFlow framework on macOS operating system is also discussed. Native hardware acceleration is based on Apple’s ML Compute framework capabilities.
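
        One of the exact closed-form references available for such validation is the Black-Scholes formula, the constant-volatility special case of a local volatility model. A sketch of the benchmark computation (the parameters are example values, and this is only the reference solution, not the DGM network):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Black-Scholes prices for European calls/puts: an exact closed-form
# reference against which a neural-network approximation can be checked.
def bs_price(S, K, T, r, sigma, call=True):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if call:
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

c = bs_price(100, 100, 1.0, 0.05, 0.2, call=True)
p = bs_price(100, 100, 1.0, 0.05, 0.2, call=False)
# sanity check: put-call parity, C - P = S - K * exp(-rT)
print(round(c - p - (100 - 100 * exp(-0.05)), 12))
```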

        Speaker: Sergey Shorokhov (RUDN University)
      • 11:45
        Analytical platform for socio-economic studies 15m

        Having started in the natural sciences, the high demand for analyzing vast amounts of complex data has reached such research areas as economics and social sciences. Big Data methods and technologies provide new efficient tools for researchers. In this paper, we discuss the main principles and architecture of a digital analytical platform aimed to support socio-economic applications. Integrating specific open-source solutions, the platform is intended to cover full-cycle data analysis and machine learning experiments, from data gathering to visualization. One of the system's primary goals is to deliver the advantages of cloud and distributed computing and GPU accelerators together with Big Data analysis techniques. The authors present an approach to building the platform from low-level services, such as storage, virtual infrastructure and pass-through authentication, up to data flow processing, analysis experiments and results representation.

        Speaker: Sergey Belov (Joint Institute for Nuclear Research)
      • 12:00
        Application of machine learning methods for the cross-classification of algorithms and multidimensional continuous optimization problems 15m

        The present work is devoted to the development of a software system for the mutual classification of families of population-based optimization algorithms and multidimensional continuous optimization problems. One of the goals of this study is to develop methods for predicting the efficiency of the algorithms included in the system and for selecting the most efficient of them for solving a user-specified optimization problem. In addition, the proposed software system can be used to extend existing test suites with new optimization problems. This work was supported by RFBR (grant No. 20-07-01053 А).

        Speaker: Mr Andrey Chepurnov (Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University)
      • 12:15
        Multi-instance learning for Rhetoric Structure Parsing 15m

        To accurately detect texts containing elements of hatred or enmity, it is necessary to take into account various features: syntax, semantics and discourse relations between text fragments. Unfortunately, at present, methods for identifying discourse relations in the texts of social networks are poorly developed. The paper considers the classification of discourse relations between two parts of a text. The RST Discourse Treebank dataset (LDC2002T07) is used to assess the performance of the methods. The dataset is a small manually annotated corpus of texts, divided into training and test samples. Since the size of this dataset is too small for training large language models, the work uses a model-prefitting approach. Model prefitting is performed on a dataset of user comments from the reddit news portal. Texts from this dataset are labeled automatically. Since automatic labeling is less accurate than manual annotation, the multiple-instance learning (MIL) method is used to train the models. In the end, the resulting model will be used as part of a text analyzer for detecting elements of hatred or enmity in the texts of social networks. A distinctive feature of modern language models is their large number of parameters. Using several models at different levels of such a text analyzer requires a lot of resources; therefore, for the analyzer to work, it is necessary to use high-performance or distributed computing. Grid systems built from personal computers can make it possible to attract and combine computing resources to solve this type of problem.

        This work was funded by RFBR according to the research project No. 21-011-44242

        Speaker: Mr Sergey Volkov (Peoples' Friendship University of Russia (RUDN University); Federal Research Center "Computer Science and Control" RAS)
    • 11:00 12:30
      Data Management, Organization and Access Conference Hall or Online - https://jinr.webex.com/jinr/j.php?MTID=m6e39cc13215939bea83661c4ae21c095

      • 11:00
        Increasing the accuracy of the diagnosis of mental disorders based on heterogeneous distributed data 15m

        The medical field, and especially diagnosis, is still an extremely poorly formalized field. This is especially true in the study of diseases associated with changes and disorders in the activity of the brain. In order to improve the results of medical research in this area, various methods of analyzing the condition of patients are used. These include both instrumental methods (MRI, EEG) and traditional medical and psychological research methods (blood tests, interviews, psychological testing, etc.)
        For each of these studies, certain conclusions are made about the patient's condition and diagnosis. However, a qualitative combination of these conclusions can only be made by an experienced physician, who most often makes a diagnosis on the basis of his own experience and uses research data only as arguments "for" or "against". The use of mathematical methods for combining and analyzing heterogeneous data makes it possible to formalize the conclusions from these sources and to increase the accuracy of the diagnosis. However, in applying the exact methods of mathematics and computer science, various problems of both an objective and a subjective nature arise.
        Firstly, all the mentioned data is in a different format, even if it is presented in digital form. In addition, they are most often distributed across various nodes and this data needs to be consolidated for general processing.
        Secondly, in accordance with the law on the protection of personal data, especially related to medical information, simple data consolidation is not sufficient in this case. A procedure for anonymizing data is required for further statistical processing. Moreover, this procedure itself has its own characteristics in comparison with the developed general methods of information depersonalization.
        Thirdly, the use of a large number of information sources leads to an increase in dimension of data processed. This, in turn, necessitates a dramatic increase in the sample size of patients for reliable statistical analysis. In practice, it may be impossible to achieve a significant increase in volume for various reasons. Therefore, the question arises of how to use these distributed heterogeneous data of small volume in the number of patients to improve the accuracy and validity of the conclusion about the patient's condition.
        The report presents the results of dividing a relatively small sample of patients into stable classes with their refinement based on additional studies.

        Speakers: Alexander Degtyarev (Professor) , Alexander Bogdanov (St.Petersburg State University)
      • 11:15
        Error detection in data storage systems and distributed voting protocols 15m

        The problems of silent data corruption detection in data storage systems (Reed-Solomon codes) and of faulty share detection in distributed voting protocols (the Shamir scheme) are treated from a uniform point of view. Namely, both can be interpreted as the problem of systematic error detection in the data set {(x_1, y_1), ..., (x_N, y_N)} generated by a polynomial function y = f(x) in some finite field. We suggest a method of solution of this problem based on the construction of the error locator polynomial in the form of an appropriate Hankel polynomial generated by symmetric functions of the data set.
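
        The naive baseline that such an error locator improves on can be sketched as a brute-force consistency check over GF(p) (the field, polynomial and shares below are toy examples; the Hankel construction itself is not shown):

```python
# Brute-force consistency check for data {(x_i, y_i)} supposed to lie on
# a degree-d polynomial over GF(p) (Shamir shares / Reed-Solomon symbols).
P = 257  # small prime field (example)

def lagrange_eval(pts, x, p=P):
    """Evaluate the interpolating polynomial through pts at x, mod p."""
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def find_error(shares, d, p=P):
    """Return the index of a single corrupted share, or None if consistent."""
    for skip in range(len(shares)):
        rest = shares[:skip] + shares[skip + 1:]
        base = rest[:d + 1]
        # do the remaining points agree with the polynomial through base?
        if all(lagrange_eval(base, x, p) == y for x, y in rest[d + 1:]):
            x, y = shares[skip]
            if lagrange_eval(base, x, p) != y:
                return skip          # the skipped share is the bad one
            return None              # all shares consistent
    return None

# f(x) = 3x^2 + 7x + 42 over GF(257): five shares, one corrupted.
f = lambda x: (3 * x * x + 7 * x + 42) % P
shares = [(x, f(x)) for x in range(1, 6)]
shares[3] = (shares[3][0], (shares[3][1] + 5) % P)
print(find_error(shares, d=2))   # 3
```

        This check costs O(N) interpolations; the Hankel-polynomial construction from the talk locates the error algebraically instead.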

        Speaker: Alexei Uteshev (St.Petersburg State University)
      • 11:30
        Passwordless Authentication Using Magic Link Technology 15m

        Nowadays, the problem of identification and authentication on the Internet is more urgent than ever. There are several reasons for this: on the one hand, there are many Internet services that keep records of users and differentiate their access rights to certain resources; on the other hand, cybercriminals' attacks on web services have become much more frequent lately. At the same time, in many cases, the weak point of systems exposed to attacks is precisely the authentication system.
        Many different authentication methods have been developed and are in use today. For their classification, the factor on which their principle of operation is based is mainly used - the knowledge factor, the ownership factor, or the inherence factor.
        Authentication methods based on the knowledge factor (e.g. password protection) are the most common and are applied almost everywhere. Their advantages are ease and low cost of implementation. On the other hand, such systems are often vulnerable to various kinds of attacks. It is estimated that up to 80% of successful hacker attacks (including attacks on the largest services with millions of users) succeeded precisely because of the weakness of the password protection system.
        In this paper, passwordless authentication methods are considered. Systems based on such methods have a number of advantages: ease of use, protection against many common types of attacks, and no need to create a large number of passwords. Passwordless authentication technologies are increasingly widespread and are already in use by a number of large companies, such as Google and Medium.
        In particular, the magic link technology is considered. Using it, the end user does not need to use a password to register or log in to the system – just to enter an email address and follow the link sent by the authentication system. The link is unique, and authorization with its help is possible only for a specific user and only for a limited time. This approach not only greatly simplifies the process of registering new users and relieves them of the need to remember passwords, but also provides reliable protection against a number of attacks related to password theft or brute-force attacks.
        An authentication system has been implemented using Keycloak. Keycloak is an open-source software product that implements single sign-on technology, in which a user can switch from one system to another connected to the first one without re-authentication.
        Thus, this paper presents a solution to the problem of passwordless authentication, which can be applied in a number of online services and systems. In the future, it is possible to further improve the system, in particular, using adaptive authentication, which allows switching between different authentication mechanisms depending on certain factors.
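
        A minimal sketch of how an expiring, signed magic-link token can be constructed (a hypothetical HMAC-based scheme with invented names and URL; Keycloak's actual action-token format differs):

```python
import hmac, hashlib, time, base64

# Hypothetical magic-link scheme: an HMAC-signed email+expiry pair.
# SECRET would live server-side only (example value here).
SECRET = b"server-side-secret"

def make_link(email: str, ttl: int = 900, now=None) -> str:
    """Build a unique link valid for ttl seconds."""
    exp = int(now if now is not None else time.time()) + ttl
    payload = f"{email}|{exp}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    token = base64.urlsafe_b64encode(payload).decode() + "." + sig
    return "https://example.org/auth/magic?token=" + token

def verify(token: str, now=None):
    """Return the email if the token is valid and unexpired, else None."""
    try:
        b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(b64)
    except ValueError:
        return None
    if not hmac.compare_digest(
            sig, hmac.new(SECRET, payload, hashlib.sha256).hexdigest()):
        return None                       # signature forged or damaged
    email, exp = payload.decode().rsplit("|", 1)
    if (now if now is not None else time.time()) > int(exp):
        return None                       # link expired
    return email
```

        A production system would additionally store the token server-side so that each link can be used only once.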

        Speaker: Iurii Matiushin (Saint Petersburg State University)
      • 11:45
        RISK MODEL OF APPLICATION OF LIFTING METHODS 15m

        The article discusses the main provisions (methods, risk models, calculation algorithms, etc.) of organizing the protection of personal data (PD) based on the application of an anonymization procedure. The authors demonstrate the relevance of the studied problem, pointing to the general growth of informatization and the further development of Big Data technology. This circumstance leads to the need to use the so-called risk approach, based on calculating the risk of PD as a probabilistic assessment of the amount of possible damage that the owner of the data resource may incur as a result of a successfully carried out information attack. For this purpose, the article describes an algorithm for calculating the risk of PD and proposes a risk model of the depersonalization procedure, which considers confidentiality problems arising both as a result of unauthorized access and as a consequence of planned data processing. To describe the risk model of the anonymization procedure, the types of attacks on the confidentiality of personal data, anonymization metrics and equivalence classes are analyzed, as well as attacker profiles and data distribution scenarios. Thus, the choice of a risk model for the depersonalization procedure is justified, and calculations for a generated synthetic set of PD are presented. In conclusion, it should be noted that the proposed model of anonymization risk assessment, tested on synthetic data, makes it possible to abandon the concept of guaranteed anonymized data, introducing certain boundaries for working with risks and building a continuous process for assessing PD threats, taking into account the constantly growing volume of stored and processed information.

        Keywords: information protection, personal data, depersonalization, information systems, model, risk of the depersonalization procedure.
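
        One of the anonymization metrics mentioned above, k-anonymity over equivalence classes of quasi-identifiers, can be sketched on toy records (the field names and values are invented examples):

```python
from collections import Counter

# k-anonymity: every record must share its quasi-identifier tuple with
# at least k-1 other records; k is the smallest equivalence-class size.
def k_anonymity(records, quasi_ids):
    """Smallest equivalence-class size over the quasi-identifier tuple."""
    classes = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(classes.values())

records = [
    {"zip": "141980", "age_band": "30-40", "diagnosis": "A"},
    {"zip": "141980", "age_band": "30-40", "diagnosis": "B"},
    {"zip": "141980", "age_band": "40-50", "diagnosis": "A"},
]
print(k_anonymity(records, ["zip", "age_band"]))   # 1: one class is unique
```

        A risk model then relates the achieved k (and similar metrics) to the probability of re-identification under a given attacker profile.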

        Speaker: Mr Aleksandr Dik (Saint Petersburg State University)
      • 12:00
        Solving the problems of Byzantine generals using blockchain technology 15m

        The process of digitalization of the Russian economy, as the basis for the transition to the digital economy, is conditioned by the requirements of objective reality and rests, first of all, on the introduction of digital technologies into the activities of its actors. The most promising is blockchain technology, which offers the most effective coordination of the economic interests of the actors of the digital economy and is applicable in various spheres of economic activity. The article discusses the basics of cryptocurrencies and blockchain operation, as well as the contexts in which the "Byzantine generals problem" (a decision-making task) arises. A comparison is made of solutions to this problem with different blockchain technologies on several platforms that prevent behavior dangerous to the transaction network, thereby increasing the competitiveness of the cryptocurrency.

        Speaker: Jasur Kiyamov (St.Petersburg State University)
    • 11:00 12:30
      HPC 403 or Online - https://jinr.webex.com/jinr/j.php?MTID=mf93df38c8fbed9d0bbaae27765fc1b0f

      • 11:00
        Features of HPC resources for HEP 15m

        In the wake of the success of the integration of the Titan supercomputer into the ATLAS computing infrastructure, the number of such projects began to increase. However, it turned out that it is extremely difficult to ensure efficient data processing on such types of resources without deep modernization of both applied software and middleware. This report discusses in detail the problems and ways to solve them using such resources in the field of HEP experiments with their advanced and highly automated data processing systems.

        Speaker: Artem Petrosyan (JINR)
      • 11:15
        Energy analysis of plasma physics algorithms 15m

        High-performance supercomputers have become some of the most power-hungry machines: the power draw of a top supercomputer is about 30 MW. Recent legislative trends in the area of the carbon footprint are affecting high-performance computing. In our work, we collect energy measurements from different kinds of Intel server CPUs. We present a comparison of the energy efficiency of our new Poisson solver, which is useful for plasma physics and astrophysics computations, on different kinds of CPUs. This work is supported by RSCF grant No. 19-71-20026.
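
        On Linux, per-package CPU energy is commonly read from the RAPL powercap counters; a sketch of the wrap-aware delta computation (the maximum counter value below is an example, and the sysfs path is the standard intel-rapl location):

```python
# RAPL powercap counters report energy in microjoules and wrap around
# at max_energy_range_uj; the delta between two reads must account for it.
RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def energy_joules(uj_before: int, uj_after: int, uj_max: int) -> float:
    """Energy consumed between two counter reads, wrap-aware."""
    delta = uj_after - uj_before
    if delta < 0:                       # counter wrapped around
        delta += uj_max
    return delta / 1e6

def read_counter(path=RAPL) -> int:
    """Read the current energy counter (requires a Linux host with RAPL)."""
    with open(path) as f:
        return int(f.read())

print(energy_joules(5_000_000, 12_500_000, 262_143_328_850))  # 7.5
```

        Reading the counter before and after a solver run, and dividing by wall time, gives the average package power compared across CPU models.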

        Speaker: Igor Chernykh (Institute of Computational Mathematics and Mathematical Geophysics SB RAS)
      • 11:30
        VM based Evaluation of the Scalable Parallel Minimum Spanning Tree Algorithm for PGAS Model 15m

        The minimum spanning tree problem is of great importance in computer science, network analysis and engineering. However, sequential algorithms become unable to process the problem as the volume of data representing graph instances keeps growing. Instead, high-performance computing resources are used to process large-scale graph instances in a distributed manner. Generally, standard shared- or distributed-memory models such as OpenMP and the Message Passing Interface are applied for parallelization. As an emerging alternative, the Partitioned Global Address Space model communicates in the form of asynchronous remote procedure calls to access distributed shared memory, improving performance through overlapping communications and locality-aware structures. The paper presents a modification of the Kruskal algorithm for the MST problem, with a performance and energy-efficiency evaluation relying on these emerging technologies. The evaluation shows good scalability within a server of up to 16 vCPUs and between physical servers, on connected weighted graphs with different numbers of vertices and edges and different densities.
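
        The sequential Kruskal baseline that such a parallel modification starts from can be sketched as follows (toy graph; the PGAS distribution itself is not shown):

```python
# Sequential Kruskal with union-find: sort edges by weight and add each
# edge that joins two previously disconnected components.
def kruskal(n, edges):
    """edges: list of (weight, u, v); returns (mst_weight, mst_edges)."""
    parent = list(range(n))

    def find(x):                       # path-compressing find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, mst = 0, []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge joins two components
            parent[ru] = rv
            total += w
            mst.append((u, v))
    return total, mst

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]
print(kruskal(4, edges))   # (6, [(1, 2), (2, 3), (0, 2)])
```

        The edge sort and the union-find structure are exactly the parts that a distributed variant must partition across workers.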

        Speaker: Vahag Bejanyan
      • 11:45
        Data storage systems of "HybriLIT" Heterogeneous computing platform for scientific research carried out in JINR: filesystems and RAIDs performance research 15m

        "HybriLIT" Heterogeneous platform is a part of the Multifunctional Information and Computing Complex (MICC) of the Laboratory of Information Technologies named after MG Meshcheryakov of JINR, Dubna. Heterogeneous platform consists of Govorun supercomputer and HybriLIT education and testing polygon. Data storage and processing system is one of the platform components. It is implemented using distributed and parallel filesystems (NFS, EOS, Lustre). Platform performance depends on many factors, including performance of storage and file systems.
        The best storage performance for wide variety of user jobs may be obtains with optimal filesystem parameters. The number of tests of local filesystems (EXT family and XFS) was carried out. There were empirically obtained an optimal parameters o data storage system at which the performance have been high results.
        The new methodology was developed for analyzing the obtained measurements of IOPS (input-output operations per second) and Latency (milliseconds) for results evaluations.
        Various filesystems were analyzed by the developed methodology. The conclusion was drawn about of optimal parameters of the investigated filesystems.
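        A minimal way to collect IOPS and latency samples of the kind analyzed above is to time random single-block reads against a file. The sketch below is only an illustration of the measurement idea; real filesystem benchmarks use a dedicated tool such as fio, with O_DIRECT to bypass the page cache.

        ```python
        import os
        import random
        import statistics
        import time

        def bench_random_reads(path, block=4096, iters=500):
            """Time random single-block reads; return IOPS and latency percentiles (ms)."""
            size = os.path.getsize(path)
            fd = os.open(path, os.O_RDONLY)
            lat_ms = []
            try:
                for _ in range(iters):
                    offset = random.randrange(0, max(1, size - block))
                    t0 = time.perf_counter()
                    os.pread(fd, block, offset)
                    lat_ms.append((time.perf_counter() - t0) * 1000.0)
            finally:
                os.close(fd)
            total_s = sum(lat_ms) / 1000.0
            return {
                "iops": iters / total_s,
                "lat_ms_p50": statistics.median(lat_ms),
                "lat_ms_p99": statistics.quantiles(lat_ms, n=100)[98],
            }
        ```

        Reporting both the median and a tail percentile, rather than a single average, is what makes it possible to compare filesystems whose throughput is similar but whose latency distributions differ.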

        Speaker: Aleksander Kokorev
      • 12:00
        HPC workload balancing algorithm for co-scheduling environments 15m

        Commonly used job schedulers in high-performance computing environments do not allow resource oversubscription. They usually assign an entire node to a single job even if the job utilises only a small portion of the node's available resources. This may lead to under-utilization of cluster resources and to an increase in job wait time in the queue. Every job may have different requirements for shared resources (e.g. network, memory bus, IO bandwidth or CPU cores), and these may not overlap with the requirements of other jobs. Because of that, running non-interfering jobs simultaneously on shared resources may increase resource utilization.

        Without accounting for jobs' resource requirements and their performance degradation due to sharing resources, co-scheduling may only decrease job performance and overall scheduler throughput. In this work, we propose a method for measuring job run-time performance and an algorithm for selecting and running combinations of jobs simultaneously on shared resources.
        The performance metrics were validated on experimental data, and the algorithm was derived from a mathematical model, tested in numerical simulations and implemented in the scheduler.
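        The selection step can be illustrated by a greedy pairing heuristic: co-locate two jobs only if the predicted mutual slowdown stays under a threshold, otherwise run the job alone. This is an assumption for illustration only; the talk's actual model-derived algorithm and its degradation predictor are not reproduced here.

        ```python
        def select_pairs(jobs, degradation, max_slowdown=1.2):
            """Greedily pair jobs whose mutual predicted slowdown stays below a threshold.

            degradation(a, b) -- predicted slowdown factor of job a when co-run with b
                                 (1.0 means no interference); assumed given by a model.
            Returns a list of (job, partner) tuples; partner is None for solo runs.
            """
            unpaired = list(jobs)
            pairs = []
            while unpaired:
                a = unpaired.pop(0)
                best, best_cost = None, None
                for b in unpaired:
                    # Penalize the pair by the worse of the two directions.
                    cost = max(degradation(a, b), degradation(b, a))
                    if cost <= max_slowdown and (best_cost is None or cost < best_cost):
                        best, best_cost = b, cost
                if best is not None:
                    unpaired.remove(best)
                    pairs.append((a, best))
                else:
                    pairs.append((a, None))   # no compatible partner: run alone
            return pairs
        ```

        With, say, two CPU-bound and two IO-bound jobs whose like-with-like slowdown exceeds the threshold, the heuristic pairs each CPU-bound job with an IO-bound one, which is exactly the non-interfering combination the abstract argues for.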

        Speaker: Ruslan Kuchumov (Saint Petersburg State University)
      • 12:15
        Characteristics of Nvidia CUDA and AMD ROCm Platforms Affecting Performance Portability 15m

        The development and popularization of the AMD ROCm platform with HIP technology allows one to create code that is not locked to a specific vendor while maintaining a high level of performance. A lot of legacy but still supported code was originally written in CUDA and is now getting ROCm HIP support as well. In a recent paper [1], the performance of popular molecular dynamics packages with GPU support was discussed in detail. That research includes the LAMMPS package, which provides backends for CUDA, OpenCL, and HIP. Based on this package, we can compare the platforms and examine in detail their properties and their performance impact on real parallel code. Differences can be found in the characteristics of the target hardware, in the operation of the software environment and drivers, and even in the logic of the application code itself. As a continuation of that study, this work considers the operation of computational GPU kernels in an application using several MPI processes per GPU.

        1. Kondratyuk N, Nikolskiy V, Pavlov D, Stegailov V. GPU-accelerated molecular dynamics: State-of-art software performance and porting from Nvidia CUDA to AMD HIP. The International Journal of High Performance Computing Applications. April 2021. doi:10.1177/10943420211008288
        
        Speaker: Vsevolod Nikolskiy (HSE)
    • 11:00 12:30
      Virtualization 310 or Online - https://jinr.webex.com/jinr/j.php?MTID=m326d389213a5963a1114b8cbf9613612

      • 11:00
        Quantitative and qualitative changes in the JINR cloud infrastructure 15m

        High demand for the JINR cloud resources facilitated its significant growth. That growth triggered changes needed to overcome the problems encountered and to maintain the QoS for users: the main part of the computational resources was re-organized as pre-deployed worker nodes of an HTCondor-based computing element to decrease the load on the OpenNebula services during mass job submission; a new SSD-based Ceph pool was created for the RBD disks of VMs with strong disk I/O requirements; a dedicated Ceph-based storage was deployed for the NOvA experiment; the Infrastructure-as-Code approach was re-organized from scratch based on the role-and-profile model implemented with the help of Foreman and Puppet; resource monitoring and accounting were migrated to a Prometheus-based software stack; and some other changes were made.

        Speaker: Nikolay Kutovskiy (JINR)
      • 11:15
        The use of distributed clouds for scientific computing 15m

        Nowadays, cloud resources are the most flexible tool for providing access to infrastructures for establishing services and applications. But they are also a valuable resource for scientific computing. At the Joint Institute for Nuclear Research, the computing cloud was integrated with the DIRAC system, which allowed the submission of scientific computing tasks directly to the cloud. With that experience, the cloud resources of several organizations from the JINR Member States were integrated in the same way. That increased the total amount of cloud resources accessible in a uniform way through DIRAC, within the scope of the so-called distributed information and computing environment (DICE). Folding@Home tasks related to the SARS-CoV-2 virus were submitted to all available cloud resources. Apart from useful scientific results, this experience was also helpful in obtaining information about the performance, limitations, strengths, and weaknesses of the united system. Based on the gained experience, the DICE infrastructure was tuned to successfully complete real user jobs related to Monte Carlo generation for the Baikal-GVD experiment.

        Speaker: Igor Pelevanyuk (Joint Institute for Nuclear Research)
      • 11:30
        Minimizing the root filesystem images of Docker containers 15m

        Recently, containers have been used quite often in application development and testing, because a container is a convenient and lightweight tool. A container is built from an image, which serves as a template for the future container; the image is also what is transferred over the network. Image sizes can reach several gigabytes. Thus, if an application whose container images are rather large needs to be transferred over the network, or there is a physical limit on available storage (for example, when developing IoT systems), the image size must be minimized.
        Container images are built on top of the Linux operating system kernel, and all work with images is done via shell instructions. To allow interactive work with containers, images include tools and files that the application inside the container never uses. The applications that will be run in the container, and their number, are known in advance, so Linux debugging tools can be used to determine which files are used by the applications in the container and which can be excluded from it. This is exactly how the chainsaw application works.
        This work presents the results of a study of several different Docker container images. A threefold size reduction was obtained for some of the images, and it was established that the size of the minimized image is significantly affected by the base image used and by the programming language in which the application running inside the container is written.
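        The file-usage discovery step described above can be sketched by parsing the output of `strace -f -e trace=openat <app>` and keeping only the paths that were actually opened successfully; everything else is a candidate for exclusion from the image. This is an illustration of the idea only, not chainsaw's actual implementation.

        ```python
        import re

        # Matches lines like:
        #   openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
        # capturing the path and the syscall return value.
        OPEN_RE = re.compile(r'openat\([^,]+,\s*"([^"]+)"[^)]*\)\s*=\s*(-?\d+)')

        def files_used(strace_lines):
            """Paths successfully opened, according to strace openat output."""
            used = set()
            for line in strace_lines:
                m = OPEN_RE.search(line)
                if m and int(m.group(2)) >= 0:   # negative return value = open failed
                    used.add(m.group(1))
            return used
        ```

        The set returned for every application in the container, plus the application binaries themselves, forms the keep-list from which a minimized root filesystem can be assembled.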

        Speaker: Irina Nikolaeva
      • 11:45
        Resource Management in Private Multi-Service Cloud Environments 15m

        The JINR cloud infrastructure hosts a number of cloud services to facilitate scientific workflows of individual researchers and research groups. While the batch processing systems are still the major compute power consumers of the cloud, new auxiliary cloud services and tools are being adopted by researchers and are gradually changing the landscape of the cloud environment. While such services, in general, are not so demanding in terms of computational capacity, they can still have spikes of demand and can dynamically scale to keep the service availability at a reasonable level. Moreover, these services might need to compete for the resources due to the limited capacity of the underlying infrastructure. In this talk we’ll discuss how resource distribution could be managed in such a dynamic environment with the help of a cloud meta-scheduler.

        Speaker: Mr Nikita Balashov (JINR)
      • 12:00
        Evaluating Different Options for Scientific Computing in Public Clouds 15m

        Cloud computing has emerged as a new paradigm for on-demand access to a vast pool of computing resources that provides a promising alternative to traditional on-premises resources. There are several advantages of using clouds for scientific computing. Clouds can significantly lower time-to-solution via quick resource provisioning, skipping the lengthy process of building a new cluster on-premises or avoiding long queue wait times on shared computing facilities. By providing a wide range of possible virtual machine configurations, clouds make it easy to adapt to changing workloads. Clouds can also reduce the total cost of ownership by allowing dynamic auto-scaling of computing resources depending on the current workload, or by leveraging spot instances that represent excess cloud capacity. A new serverless computing model has become popular recently, which enables users to seamlessly execute so-called cloud functions without having to manually manage and scale virtual machine instances.

        Nowadays public clouds provide many options for running computing tasks ranging from manually managed on-demand virtual machines and HPC clusters to preemptible spot instances and cloud functions. This brings up several questions: which options are suitable for which kind of applications and use cases, what are their advantages and drawbacks, and how these options compare to traditional computing resources such as on-premises clusters. To answer these questions, we have implemented support for using the mentioned options as computing resources for running applications on Everest, a web-based distributed computing platform. This platform provides users with tools to publish and share computing applications as web services, and manages the execution of these applications on user-provided computing resources. Since Everest already supports the use of on-premises servers and clusters as such resources, this allowed us to evaluate and compare the new cloud-based resources against the traditional ones for execution of typical scientific computing applications such as bag-of-tasks and workflows. This approach also enables simple migration of existing applications to these new resources.
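        A bag-of-tasks workload of the kind mentioned above consists of independent tasks with no inter-task communication, which is what makes it portable across all these resource types. A local sketch with a worker pool conveys the execution pattern (illustrative only; Everest itself dispatches tasks to remote user-provided resources rather than local threads):

        ```python
        from concurrent.futures import ThreadPoolExecutor, as_completed

        def run_bag_of_tasks(tasks, worker, max_workers=4):
            """Run independent tasks on a pool of workers; completion order is arbitrary.

            tasks  -- iterable of task inputs
            worker -- function applied to each task input
            Returns a dict mapping each task input to its result.
            """
            results = {}
            with ThreadPoolExecutor(max_workers=max_workers) as pool:
                futures = {pool.submit(worker, t): t for t in tasks}
                for fut in as_completed(futures):
                    results[futures[fut]] = fut.result()
            return results
        ```

        Because no task depends on another, the same loop maps unchanged onto on-demand VMs, spot instances, or cloud functions; only the submission backend changes, which is what makes the comparison across resource types meaningful.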

        In this report we will describe the implementation of new cloud-based resources for Everest and will present the results of their experimental evaluation and comparison.

        Speaker: Dr Oleg Sukhoroslov (IITP RAS, NRU HSE)
      • 12:15
        Applying new virtualization technologies in the educational process 15m

        The modern development of IT infrastructure is proceeding rapidly, with a transition to cloud computing and the adoption of virtualization methods.
        At the same time, efficiently solving large-scale scientific problems currently requires high-performance computing, including the use of distributed computing environments of various purposes.
        All these processes require new personnel who have mastered these technologies.
        The main problem encountered in teaching distributed computing technologies is conducting practical classes. It is well known that any learning process consists of three components: verbal communication (lectures, instruction, etc.), visual aids (demonstrations, viewing) and practice (practical and laboratory assignments). It is precisely the practical classes that are the main problem in mastering distributed computing technologies.
        To resolve this issue, we propose creating a training polygon consisting of six identical workstations running the CentOS 7 operating system. After an appropriate upgrade and the purchase of network equipment, these computers will be combined into a fault-tolerant cluster using Apache Mesos. The choice of this software is based on an analysis of publications, trends in the virtualization field, the capabilities of the software itself, and the numerous extensions for it developed by the open-source community, which give it additional functionality.
        Each task will be launched in a separate isolated container, which will ensure the integrity of the processed data, since programs in different containers cannot affect each other. The containers will be implemented using the Docker engine.
        Such a polygon, combining a cluster with containerization technology and its potential, will greatly increase the efficiency of solving a wide range of educational and methodological tasks facing the university, will improve the convenience and quality of education, will be a good foundation for research and practical activities, and will also serve as an initial base for training and retraining entry-level IT specialists for various commercial organizations.

        Speaker: David Satseradze (Professor, Lecturer)
    • 12:30 13:15
      Closing 45m

      Conference Hall or Online - https://jinr.webex.com/jinr/j.php?MTID=m6e39cc13215939bea83661c4ae21c095