Architecture of a generative adversarial network and preparation of input data for modeling gamma event images for the TAIGA-IACT experiment

Jul 5, 2021, 4:15 PM
407 or Online


Sectional reports: 9. Big data Analytics and Machine learning


Yulia Dubenskaya (SINP MSU)


Very-high-energy gamma-ray photons interact with the atmosphere to give rise to cascades of secondary particles, Extensive Air Showers (EASs), which in turn generate very short flashes of Cherenkov radiation. These flashes are detected on the ground with Imaging Air Cherenkov Telescopes (IACTs). In the TAIGA project, in addition to images directly detected and recorded by the experimental facilities, images obtained by simulation are used extensively. The problem is that computational models of the underlying physical processes (such as the interactions and decays of the cascade of charged particles in the atmosphere) are very resource-intensive, since they track the type, energy, position, direction, and arrival time of every secondary particle born in the EAS. On average, such computational methods yield only about 1000 images per hour, which can create a computational bottleneck for the experiment due to a lack of model data. To address this challenge, we applied a machine learning technique, Generative Adversarial Networks (GANs), to quickly generate images of gamma events for the TAIGA project. Initial analysis of the generated images showed the applicability of the method, but revealed some features that require special preparation of the input data. In particular, it was important to teach the network that, in our case, gamma images are elliptical and the angle between the image axis and the direction to the gamma-ray source is close to zero. In this article we provide an example of a GAN architecture suitable for generating images of gamma events similar to those obtained from the IACTs of the TAIGA project. Testing the results with third-party software showed that more than 95% of the generated images are correct. At the same time, generation is fast: after training, producing 4000 events takes about 10 seconds.
In the article, we also discuss the possibility of improving the generated images by preprocessing the input data.
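As a hypothetical illustration of such preprocessing (the function name and the toy image below are our own, not from the TAIGA pipeline), the orientation of an elliptical event image can be estimated from its intensity-weighted second moments, in the spirit of a Hillas-style analysis; each training image could then be rotated so that its major-axis angle is close to zero before being fed to the GAN:

```python
import numpy as np

def image_orientation(img):
    """Estimate the major-axis angle (radians, in array coordinates)
    of an image from its intensity-weighted second moments."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    total = img.sum()
    cx = (x * img).sum() / total
    cy = (y * img).sum() / total
    sxx = ((x - cx) ** 2 * img).sum() / total
    syy = ((y - cy) ** 2 * img).sum() / total
    sxy = ((x - cx) * (y - cy) * img).sum() / total
    # Principal-axis direction of the covariance matrix:
    # tan(2 * theta) = 2 * sxy / (sxx - syy)
    return 0.5 * np.arctan2(2.0 * sxy, sxx - syy)

# Toy elliptical "event": a 2-D Gaussian tilted by 0.3 rad.
h = w = 64
theta = 0.3
a, b = 10.0, 4.0  # major / minor axis widths in pixels
y, x = np.mgrid[0:h, 0:w]
cx = cy = (w - 1) / 2.0
u = (x - cx) * np.cos(theta) + (y - cy) * np.sin(theta)
v = -(x - cx) * np.sin(theta) + (y - cy) * np.cos(theta)
img = np.exp(-(u ** 2 / (2 * a * a) + v ** 2 / (2 * b * b)))

theta_hat = image_orientation(img)  # close to 0.3; rotating the image
                                    # by -theta_hat would align its axis
                                    # with zero
```

Rotating every training image by the negative of its estimated angle is one way to encode the prior that the angle between the image axis and the source direction is near zero.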


We provide an example of a generative adversarial network architecture suitable for generating model images of gamma events for the TAIGA project. We also discuss the possibility of improving such images by preprocessing the input data.
