We present our results on updating the middleware of Russian GRID sites so that they can continue processing ALICE data in the future, including the High-Luminosity (HL-LHC) stage of Large Hadron Collider operation. We share our experience with one of the GRID sites and discuss practical cases of scaling the updated middleware to other Russian sites in 2022-2023.
This work is supported by the ...
Every year the ATLAS experiment produces several billion event records in raw and other formats. The data are spread among hundreds of computing Grid sites around the world. The EventIndex is the complete catalogue of all ATLAS real and simulated events, keeping references to all permanent files that contain a given event at any processing stage; its implementation has been substantially...
The CREST project is a new implementation of the Conditions DB for the ATLAS experiment, using a REST API and JSON. The project simplifies the conditions data structure and optimizes data access.
CREST development requires not only a C++ client library (CrestApi) but also various tools for testing the software and validating the data. A command line client (crest_cmd) was written to get a...
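As a rough illustration of the REST/JSON access pattern that CrestApi and crest_cmd wrap in C++, a thin client might build its requests as sketched below. The base URL, endpoint paths, and payload fields here are hypothetical placeholders, not the actual CREST schema.

```python
import json
from urllib.parse import urlencode, urljoin

# Hypothetical base URL and endpoint names -- the real CREST REST paths
# may differ; this only illustrates the REST/JSON access pattern.
BASE_URL = "http://crest.example.org/api/"

def make_tag_url(tag_name: str) -> str:
    """URL for fetching the metadata of a conditions tag."""
    return urljoin(BASE_URL, f"tags/{tag_name}")

def make_iov_query_url(tag_name: str, since: int, until: int) -> str:
    """URL for listing intervals of validity (IOVs) of a tag."""
    query = urlencode({"tagname": tag_name, "since": since, "until": until})
    return urljoin(BASE_URL, "iovs") + "?" + query

def make_payload(data: dict) -> str:
    """Serialize a conditions payload as JSON for an HTTP PUT/POST body."""
    return json.dumps(data, sort_keys=True)
```

A command-line tool like crest_cmd would then only need to map its arguments onto such URL/payload builders and print the JSON responses.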
P-BEAST is a highly scalable, highly available and durable system for archiving monitoring information of the trigger and data acquisition (TDAQ) system of the ATLAS experiment at CERN. The Grafana plugin communicates with P-BEAST via its REST API using JSON. Grafana, a multi-platform open-source analytics and interactive visualization web application, is continuously developed with...
The 8th BM@N physics run, using xenon ion beams, was successfully completed in February 2023, resulting in the recording of approximately 550 million events. The events were recorded in 31306 files with a combined size exceeding 430 TB. The reconstruction of these files demands significant computing resources, which is why a distributed infrastructure unified by DIRAC was chosen...
The Configuration Information System (CIS) has been developed for the BM@N experiment to store and provide data on the configuration of the experiment's hardware and software systems while data are being collected from the detectors in online mode. The CIS allows loading configuration information into the data acquisition and online processing systems, activating the hardware setups and launching all...
The high-precision coordinate detectors of the tracking system in the BM@N experiment are based on microstrip readout. The complete tracking system designed for the latest xenon physics run (winter 2023) consists of three parts: an ion-beam tracker and two trackers (inner and outer) that register charged particles produced in primary interactions. The report reviews the features and...
Machine learning methods are increasingly applied to high energy physics tasks, in particular to charged particle identification (PID), because machine learning algorithms improve PID in regions where conventional methods fail to provide good identification. This report gives results of applying gradient boosted decision trees to particle...
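The core idea of gradient boosted decision trees can be sketched from scratch: each round fits a weak learner (here, a decision stump) to the residuals of the current ensemble. The toy 1-D regression below is illustrative only; PID studies like the one in this report would use a full library (e.g. XGBoost or TMVA) on multi-dimensional detector features.

```python
# Minimal from-scratch sketch of gradient boosting with decision stumps.
# Data and hyperparameters are illustrative, not taken from the report.

def fit_stump(x, residuals):
    """Find the 1-D threshold split minimizing squared error on residuals."""
    best = None
    for thr in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= thr]
        right = [r for xi, r in zip(x, residuals) if xi > thr]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, thr, lmean, rmean)
    _, thr, lmean, rmean = best
    return lambda xi: lmean if xi <= thr else rmean

def gradient_boost(x, y, n_rounds=50, lr=0.5):
    """Each round fits a stump to the current residuals (the negative
    gradient of the squared loss) and adds it to the ensemble."""
    stumps, pred = [], [0.0] * len(x)
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * s(xi) for s in stumps)

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0, 0.1, 0.2, 1.0, 1.1, 1.2]   # step-like toy target
model = gradient_boost(x, y)
```

For classification (the PID case) the squared loss would be replaced by a log-loss, but the residual-fitting loop stays the same.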
Event reconstruction in the SPD (Spin Physics Detector) experiment of the NICA mega-science project presents a significant challenge: processing a high data flow that contains few valuable events. To address this, we propose novel approaches to unraveling time slices. With a data rate of 20 GB/s and a pileup of about 40 events per time slice, our methods focus on efficient event reconstruction...
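A naive baseline for unraveling a time slice is to cluster hits by gaps in their timestamps; with about 40 piled-up events per slice the actual SPD algorithms must be far more sophisticated, so the function and threshold below are purely illustrative.

```python
def unravel_time_slice(hit_times, max_gap=5.0):
    """Group time-ordered hit timestamps (e.g. in ns) into event
    candidates: a gap larger than max_gap starts a new candidate.
    This is only a baseline; genuinely overlapping (piled-up) events
    need cleverer, e.g. spatially-aware, unraveling methods."""
    events, current = [], []
    for t in sorted(hit_times):
        if current and t - current[-1] > max_gap:
            events.append(current)
            current = [t]
        else:
            current.append(t)
    if current:
        events.append(current)
    return events
```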
In accordance with the technical design report, the SPD detector, which is being built at the NICA collider at JINR, will produce trillions of physics events per year, estimated at dozens of petabytes of data, which puts it on a par with the experiments at the Large Hadron Collider. Although the facility is still under construction, these figures must already be taken into account at the...
Particle tracking is critical in high-energy physics experiments, but traditional methods such as the Kalman filter struggle with the massive amounts of data generated by modern experiments. This is where deep learning comes in, providing a significant boost in efficiency and tracking accuracy.
A new experiment called the SPD is planned for the NICA collider, which is currently under...
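The Kalman filter that serves as the traditional baseline can be sketched in a few lines for a 1-D constant-velocity track. The 2x2 matrix algebra is unrolled by hand to keep the example dependency-free, and the noise parameters are illustrative, not SPD values.

```python
# Minimal 1-D constant-velocity Kalman filter: state is (position,
# velocity), and only the position is measured at each step.

def kalman_track(measurements, dt=1.0, meas_var=0.25, proc_var=1e-4):
    x, v = measurements[0], 0.0              # initial state guess
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0  # covariance P, unrolled
    for z in measurements[1:]:
        # predict: x' = x + v*dt, v' = v;  P' = F P F^T + Q (Q ~ diag)
        x = x + v * dt
        p00 = p00 + dt * (p01 + p10) + dt * dt * p11 + proc_var
        p01 = p01 + dt * p11
        p10 = p10 + dt * p11
        p11 = p11 + proc_var
        # update with position measurement z (H = [1, 0])
        s = p00 + meas_var                   # innovation covariance
        k0, k1 = p00 / s, p10 / s            # Kalman gain
        resid = z - x                        # innovation
        x += k0 * resid
        v += k1 * resid
        p00, p01, p10, p11 = ((1 - k0) * p00, (1 - k0) * p01,
                              p10 - k1 * p00, p11 - k1 * p01)
    return x, v
```

On noise-free measurements from a straight track the filter converges to the true velocity; real trackers extend this to multi-dimensional states with material effects, which is where the combinatorial cost grows.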
The NICA mega-science project sets a high bar for computing resources and for data storage and processing systems. When performing computations, members of the MPD, BM@N and SPD collaborations actively use various JINR computing resources: the MICC Tier-2 site, the Govorun supercomputer, the JINR-Cloud, and the NCX computing cluster. The computations employ a classical hierarchy of storage and processing...
Modern scientific research cannot exist without large computing systems capable of storing large volumes of data and processing them within relatively short times. Such systems include distributed centers for data collection, storage and processing (distributed data centers).
Distributed systems have a complex structure and include many diverse components, so in order to...
Extensive studies in the field of high-temperature plasma and controlled thermonuclear fusion started in the 1950s. The main goal of these studies was the creation of a power source running on the relatively cheap hydrogen isotope deuterium, heated up to hundreds of millions of degrees under conditions where a thermonuclear reaction can be obtained.
In the beginning, the simple...
In the BM@N experiment, a xenon heavy-ion beam with an energy of 2.7 GeV/nucleon interacts with a cesium target, generating many secondary particles: π, μ, p, n, γ, e, d, α, K, etc. After computer processing of the data from the detectors used in the experiment, we obtain a series of images of the tracks of the emerging particles. We processed four of them using the Gwyddion program and calculated...
We present our polymorphic non-abelian package of 3D vectors and matrices for high-speed algorithms intended for trigger applications in particle physics. The package is part of our "Math-on-Paper" C++ concept of fielding solutions whose code is as close as possible to actual scientific on-paper computations, even though it is often nearly impossible to bring paper equations...
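The "math-on-paper" style rests on operator overloading, sketched here in Python for brevity (the actual package is C++, and the class and method names below are hypothetical, not the package's API):

```python
# A toy Vec3 whose overloaded operators let code read like the vector
# algebra it implements -- the "math-on-paper" idea in miniature.

class Vec3:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z
    def __add__(self, o):  return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)
    def __sub__(self, o):  return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def __rmul__(self, s): return Vec3(s * self.x, s * self.y, s * self.z)
    def dot(self, o):      return self.x * o.x + self.y * o.y + self.z * o.z
    def cross(self, o):    # non-commutative: a.cross(b) == -(b.cross(a))
        return Vec3(self.y * o.z - self.z * o.y,
                    self.z * o.x - self.x * o.z,
                    self.x * o.y - self.y * o.x)

# Lorentz force direction, written almost as on paper: F = q (v x B)
q = 1.0
v = Vec3(1.0, 0.0, 0.0)
B = Vec3(0.0, 1.0, 0.0)
F = q * v.cross(B)
```

The non-commutativity of the cross product is one reason such a package is called non-abelian; in C++ the same expressions can compile down to branch-free code suitable for triggers.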
The presentation is devoted to the creation and development of the computing center of the SAPHIR institute (Millennium Institute for Subatomic Physics at the High Energy Frontier, Santiago, Chile).
The talk provides an overview of the implementation of the Acceptance Test Driven Development (TDD) paradigm for quality control and enhancement of the reconstruction engine in the MPD offline data analysis framework MPDRoot. The necessary changes in the codebase architecture and the ease of use of the TDD environment define the pivotal success factor: the pool of possibilities (potential) for the...