Artificial Neural Networks in High Energy Physics data processing (succinct survey) and probable future development

6 Jul 2023, 09:00
30m
MLIT Conference Hall

Plenary

Speaker

Andrey Shevel (PNPI, ITMO)

Description

Artificial Neural Networks in High Energy Physics data processing (succinct survey) and probable future development


Abstract

The rising role of Artificial Neural Networks (ANNs) as part of machine learning/deep learning (ML/DL) in High Energy Physics (HEP) and related areas has been evident over the last decades, and several reasons for this rise are identified. Results obtained with ANNs are briefly compared with results of established rules-based data analysis. Importantly, ANN usage in practice has many peculiarities, including preparation of the data for training the model implemented by the ANN, testing the model, finding and comparing several candidate models, the choice of activation function, the choice of loss function, etc. The number of ANN models and their variants can be estimated in the hundreds; the most popular model architectures are briefly described. Related topics include the exchange of models between researchers, the use of already trained ANNs, and the automation of the ANN development process. Among the main ANN problems are the speed-up of training and the interpretability of ANN results. Many theoretical aspects of ANNs have already been explained, but presumably much theory has yet to be developed.
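To make the listed workflow peculiarities concrete, a minimal sketch is given below. It is an illustration only, not material from the talk: the dataset is synthetic (a made-up signal/background rule over random features standing in for real HEP event variables), PyTorch is assumed as the framework, and ReLU activation with a binary cross-entropy loss are merely example choices among many.

```python
# Minimal sketch of the ANN workflow steps mentioned above:
# data preparation, activation/loss choice, training, and testing.
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader, random_split

torch.manual_seed(0)

# --- Data preparation: synthetic "events" with 8 features and a binary label ---
X = torch.randn(2000, 8)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float().unsqueeze(1)  # toy signal/background rule
train_set, test_set = random_split(TensorDataset(X, y), [1600, 400])
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = DataLoader(test_set, batch_size=256)

# --- Model definition: activation function choice (here ReLU) ---
model = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1),            # raw logit output
)

# --- Loss function choice: binary cross-entropy on logits ---
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# --- Training loop ---
for epoch in range(5):
    model.train()
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()

# --- Testing the model: accuracy on the held-out split ---
model.eval()
correct = total = 0
with torch.no_grad():
    for xb, yb in test_loader:
        pred = (torch.sigmoid(model(xb)) > 0.5).float()
        correct += (pred == yb).sum().item()
        total += yb.numel()
print(f"test accuracy: {correct / total:.3f}")
```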


Over the past decade, funding agencies, mainly governmental and partly private, have supported the trend towards open access to experimental data in research laboratories. This trend has been driven by several factors: the increasing importance of data sharing and collaboration in scientific research, rapid progress in ANN design, and the development of new technologies and platforms for data sharing and analysis. It has been recognized that data are easier to analyse with ANNs if they satisfy the Findable, Accessible, Interoperable, Reusable (FAIR) principles. A large amount of new experimental data requiring ANN-based analysis is expected from already running experiments and/or from those to be launched in the coming years. Naturally, new experimental data will require new, larger ANN architectures. Known large-scale general-purpose ANNs, the so-called "foundation models", demonstrate both the benefits and the risks of their use.


Finally, the idea of developing a large-scale ANN "foundation model" dedicated to HEP and related areas is suggested. Such an ANN could presumably be trained on scientific data distributed across a variety of physics experiments, under the assumption that those data satisfy the FAIR principles. The trained ANN could then be used for deep and extensive data analysis. The possible synergetic effects of such an ANN "foundation model" supported by advanced computing tools are briefly described.

Summary

The submitted material is an abstract.

Primary author

Andrey Shevel (PNPI, ITMO)

Presentation materials