Speaker
Description
As in other large particle-collision experiments, distributed event processing and computing are highly relevant in BM@N, the first running experiment of the NICA project: the data flow is so heavy that sequential processing would take hundreds of years. In the last Run of the BM@N experiment alone, about half a petabyte of raw data was collected, and when the experiment reaches its design parameters, the data volume will increase by an order of magnitude. To solve this problem, to combine all distributed resources of the experiment into a single computing system with a unified storage system, and to automate job processing flows, a software and computing architecture has been developed and is being implemented. It includes a complex of software services for distributed processing of the BM@N data flow and will be presented in the report. Furthermore, a set of online and offline software and information systems has been adapted for mass data production as part of the BM@N computing infrastructure. In addition, various auxiliary services that provide control and quality assurance of the distributed processing of physics events will be shown.