One of the key elements that must be verified before joining a Grid infrastructure is network connectivity. High bandwidth is required to move large volumes of data, and it must be accompanied by optimized routing.
Based on our previous experience with the Grid in ALICE (WLCG), AUGER (EGI), and EELA (Europe-Latin America), optimizing the path is challenging when Grid resources are distributed...
Athena is the ATLAS software framework that manages nearly all ATLAS production workflows. Most of these workflows rely on accessing data in the conditions database. CREST is a new conditions database project designed for production use in Run 4. Its primary goals are to evolve the data storage architecture, optimize access to conditions data, and enhance caching capabilities within the ATLAS...
The SPD experiment will have to collect a large amount of data: up to a trillion events (records of collision results) will have to be stored and analyzed, amounting to around ten petabytes yearly. A comparable volume of simulated particle collisions for use in detector data analysis will also be produced. This information will be distributed among a number of computing sites on various storage...
Baikal-GVD is a gigaton-volume neutrino observatory under construction in Lake
Baikal. Its data processing software consists of a core part and a managing
part. The former is a set of C++ programs built upon the BARS (Baikal Analysis
and Reconstruction Software) framework, which provides a basis for
implementing all data processing stages. The Python-based management layer
organizes these...
The Spin Physics Detector is a universal detector to be installed at the second interaction point of the NICA collider to study the spin structure of the proton and deuteron. Each large HEP experiment needs its own applied software for handling generation, simulation, reconstruction, and physics analysis tasks. Due to the commonality of such tasks among different experiments, dedicated...
The SPD (Spin Physics Detector) facility at the NICA accelerator complex at JINR is under construction. In addition to the physics facility itself, the software for the future experiment is also being developed. There is already a constant demand for sufficiently large-scale data productions to simulate physical processes in the future experiment. To facilitate their implementation, MLIT staff...
The Data Management System (DMS) design for BM@N, a fixed-target experiment of NICA (Nuclotron-based Ion Collider fAcility), is presented in this article. The BM@N DMS is based on the DIRAC Grid Community software. This system provides all the necessary tools for secure access to the experiment's data. The key service of the system is the File Catalog, which presents all the distributed storage elements...
This work focuses on the development of a method for automating the processes of building, testing, and deploying application software for the distributed data processing system of the SPD experiment at the NICA collider. The study involves a systematic analysis of the existing development process, identifying key issues such as the high labor intensity of manual operations and the lack of a...
The report presents the design of the Data Quality Monitoring (DQM) system for the BM@N experiment of the NICA project, including a description of the system's objectives, a brief overview of such systems that operate in the CERN LHC experiments, and general approaches to creating the systems. The features of the BM@N experiment are analyzed, such as the rate and volume of data received during...
In 2025, the 9th data-taking run is scheduled for the BM@N experiment. Since February 2023, when data from the 8th run were acquired, the BM@N data processing has been carried out using a geographically distributed heterogeneous infrastructure based on the DIRAC Interware software. For the 9th run, an automated task-launching methodology has been developed. The processing is triggered by the...
Operational experience with the Hyperloop train system is presented. This framework facilitates organized grid data analysis in ALICE at the Large Hadron Collider (LHC). Operational since LHC Run 3, the system enables efficient management of distributed computing resources via a web-based interface, optimizing workflow execution and resource utilization. Hyperloop structures analyses as...
The visualization of experimental data plays a vital role in high-energy physics, enabling intuitive interpretation and analysis of particle collision events. Advancements in web technologies have significantly influenced the development of interactive 3D event displays, improving accessibility and performance. This article examines the implementation of modern tools such as React, Bun,...
The Budker Institute of Nuclear Physics has several prospective future experiments, ranging from large complexes, such as the well-known Super Charm-Tau (SCT) factory or the recent project of a detector and the VEPP-6 accelerator (the detector has no official name yet), to small setups for detector studies.
The VEPP-6 project is similar to the Super Charm-Tau factory. It is a...
Modern high-energy physics (HEP) experiments generate and store vast volumes of data, which users access through complex and irregular patterns. Efficient data management in such environments requires accurate forecasting of dataset popularity to optimize storage, caching, and data distribution strategies. In this work, we propose an approach for predicting future dataset access patterns using...
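The abstract above does not specify the forecasting model; as a minimal illustrative sketch (not the authors' method), dataset popularity can be scored with an exponentially weighted average of per-window access counts, with the resulting ranking driving cache placement. All dataset names and the `alpha` parameter here are hypothetical.

```python
def forecast_popularity(history, alpha=0.5):
    """Exponentially weighted forecast of next-window access counts
    per dataset, from (window, dataset, count) records."""
    ema = {}
    for _, dataset, count in sorted(history):  # process windows in order
        prev = ema.get(dataset)
        ema[dataset] = count if prev is None else alpha * count + (1 - alpha) * prev
    return ema

# synthetic access log: dataset A is cooling down, B is heating up
history = [
    (0, "dst_runA", 100), (0, "dst_runB", 10),
    (1, "dst_runA", 60),  (1, "dst_runB", 40),
    (2, "dst_runA", 20),  (2, "dst_runB", 90),
]
scores = forecast_popularity(history)
# rank datasets for caching: hottest (highest forecast) first
ranking = sorted(scores, key=scores.get, reverse=True)
```

A real system would replace the weighted average with a trained model, but the interface (access history in, per-dataset score out) stays the same.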
The SPD Online Filter is a specialized data computing facility designed for the high-throughput, multi-step processing of data from the SPD detector. Its primary objective is real-time data reduction to minimize storage requirements and enable downstream analysis. The system combines a compute cluster with middleware that abstracts hardware complexity from the applied software.
This report...
The article demonstrates the implementation of the Workflow Management System (WfMS) for the SPD Online Filter, a high-performance computing system for the preliminary processing of SPD physics experiment data. The capabilities of each WfMS microservice are shown, and the key points of the system's creation are identified. The talk also outlines further plans for upgrading the system.
Pilot applications have become essential tools in distributed computing, offering mechanisms for dynamic workload execution and efficient resource management. They are commonly employed in high-performance computing and large-scale scientific experiments due to their flexibility and scalability. Despite their broad adoption, the field still lacks a standardized abstraction and consistent best...
The Baikal-GVD Deep-Underwater Neutrino Telescope is a cubic-kilometre detector currently being constructed in Lake Baikal. It generates about 100 GB of data daily. To obtain reliable high-quality data and to ensure stable operation of the detector, the online software has been developed. In the talk, we review the main components, architecture, principles of the software for data acquisition,...
The Spin Physics Detector (SPD) is being built as part of the NICA mega-science facility at the Joint Institute for Nuclear Research. A design feature of the detector is the absence of a classical trigger system for event selection. This makes it necessary to collect the entire set of signals generated by the subsystems. As a result, the data flow from the detector can reach 200...
Active work continues on the creation of the SPD (Spin Physics Detector) facility at the NICA accelerator complex, which is located at the Joint Institute for Nuclear Research (JINR). Since the facility will collect a large amount of data, data processing and storage will be carried out in a distributed computing environment. In this regard, there is a need for specialized software for...
The Event Metadata System (EMS) of the BM@N experiment, the first experiment of the NICA project, is an important part of the BM@N software ecosystem. Its latest version, containing the core necessary functions, has recently been deployed in the JINR infrastructure. The Event Catalogue of the EMS has been filled with nearly 700M events collected by BM@N during its first physics run, completed in February...
Digital twins (DT) of distributed data acquisition, storage, and processing centers (DDC) can be used to improve the technical characteristics of computing systems and to support decisions on equipment configuration for the purposes of scaling and resource management. The report discusses a method for creating and using DDC digital twins. A distinctive feature of the method is the...
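The abstract does not describe the modeling technique in detail; a common core of such twins is discrete-event simulation of the computing center, used to compare equipment configurations before purchasing hardware. The toy model below (entirely hypothetical, not the reported method) dispatches jobs to the first free worker and reports the makespan for two configurations.

```python
import heapq

def simulate(job_durations, n_workers):
    """Toy digital-twin model of a processing center: each job goes to
    the worker that frees up first; returns the total makespan."""
    workers = [0.0] * n_workers  # time at which each worker becomes free
    heapq.heapify(workers)
    makespan = 0.0
    for d in job_durations:
        free_at = heapq.heappop(workers)  # earliest available worker
        done = free_at + d
        makespan = max(makespan, done)
        heapq.heappush(workers, done)
    return makespan

jobs = [5.0] * 8          # eight identical 5-unit jobs
half = simulate(jobs, 4)  # 4 workers -> two rounds -> makespan 10.0
full = simulate(jobs, 8)  # 8 workers -> one round  -> makespan 5.0
```

Feeding measured job-duration distributions into such a model is one way a twin can predict the effect of a configuration change without touching the production system.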
The Spin Physics Detector (SPD) experiment is being built at the LHEP site of the Joint Institute for Nuclear Research, including a full software suite for every stage of data processing. Based on the Technical Design Report, the collaboration anticipates high luminosity, which will generate large volumes of data. To handle this workload at scale, we must either launch hundreds of nearly...
The ATLAS EventIndex is the complete catalogue of all ATLAS real and simulated events, keeping the references to all permanent files that contain a given event in any processing stage. The Event Picking Service (EPS) is a part of the EventIndex project. It automates the procedure of finding the locations of large numbers of events, extracting and collecting them into separate files. It...
In modern scientific computing, optimizing software performance is critical, especially for resource-intensive processes such as event reconstruction in high-energy physics experiments. The SpdRoot package, based on FairRoot, faces challenges with slow event processing, which increases the demands on computing time and resources. This study aims to identify and eliminate bottlenecks in SpdRoot's...
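Bottleneck hunting of the kind described above typically starts with a profiler. SpdRoot itself is C++ (where perf or gprof would apply); purely as an illustration of the methodology, the sketch below profiles a stand-in Python workload with cProfile and ranks functions by cumulative time. All function names here are invented for the example.

```python
import cProfile
import io
import pstats

def hot_loop(n):
    # stand-in for an expensive per-event reconstruction step
    s = 0.0
    for i in range(n):
        s += (i % 7) * 0.5
    return s

def reconstruct_event(n=50_000):
    # a "full event": one hot step plus some cheap bookkeeping
    return hot_loop(n) + sum(range(100))

prof = cProfile.Profile()
prof.enable()
for _ in range(20):          # process a small batch of events
    reconstruct_event()
prof.disable()

# report the top functions by cumulative time; the hot spot surfaces first
out = io.StringIO()
pstats.Stats(prof, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
```

The same workflow (profile a representative batch, sort by cumulative cost, attack the top entries) carries over directly to C++ frameworks with native tooling.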