Study of the VH(bb) production by MVA methods

6 Jul 2022, 17:50
15m
Presentation
Track 1. Machine Learning in Particle Astrophysics and High Energy Physics
Session 1. ML in Particle Astrophysics and High Energy Physics

Speaker

Faig Ahmadov (JINR & IP ANAS)

Description

Given that, at a Higgs boson mass of 125 GeV, the probability of decay into a bb pair exceeds the sum of the probabilities of all other decay channels, this channel contributes greatly to the study of the Higgs boson. The most suitable production mode for studying the Higgs boson in its bb decay is associated production with a vector boson: it was in this channel that the decay of the Higgs boson into a pair of b-quarks was first observed. The VH(bb) process is therefore a very important channel for studying the properties of the Higgs boson.
In the LHC experiments, multivariate VH(bb) analysis came into use after 2013; before that, cut-based analysis was used. Among multivariate techniques, the Boosted Decision Tree (BDT) was used in ATLAS and the Deep Neural Network (DNN) in CMS. In this work, these two methods were compared to determine which achieves the better performance. Of the three lepton channels (0L, 1L, 2L) included in the VH(bb) analysis, the 2L channel (ZH(bb), where the Z decays into two charged leptons) was chosen. The list of input variables for the BDT and DNN is similar to that used in the ATLAS analysis. Up to 0.4 million signal events and the same number of background events were used for training. The BDT hyperparameters were set to the best-performing settings used in the ATLAS analysis. Neural networks were trained on various numbers of events (2K, 5K, 10K, 0.1M, 0.2M and 0.4M), and in each case NN settings were found that outperform the BDT. It turns out that for any number of training events, corresponding NN settings with better performance than the BDT can be found. The only drawback of NN training is that it is computationally intensive compared to the BDT.
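The BDT-vs-DNN comparison described above can be sketched as follows. This is a minimal illustration using scikit-learn on a synthetic balanced dataset standing in for the real signal/background ntuples; the feature count, hyperparameter values, and network architecture are illustrative assumptions, not the tuned ATLAS or CMS settings.

```python
# Hypothetical sketch: compare a BDT and a small DNN on a toy binary
# classification task via ROC AUC. Synthetic data replaces the real
# VH(bb) kinematic inputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Toy stand-in for balanced signal/background samples with ~10
# kinematic-style input variables.
X, y = make_classification(n_samples=20_000, n_features=10,
                           n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# BDT: gradient-boosted decision trees (depth and learning rate are
# illustrative, not the ATLAS-tuned values).
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=4,
                                 learning_rate=0.1, random_state=0)
bdt.fit(X_train, y_train)
auc_bdt = roc_auc_score(y_test, bdt.predict_proba(X_test)[:, 1])

# DNN: a small fully connected network as a stand-in for the
# CMS-style model.
dnn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300,
                    random_state=0)
dnn.fit(X_train, y_train)
auc_dnn = roc_auc_score(y_test, dnn.predict_proba(X_test)[:, 1])

print(f"BDT AUC: {auc_bdt:.3f}  DNN AUC: {auc_dnn:.3f}")
```

In practice one would repeat such a scan over training-sample sizes and network settings, as in the study, to find the configuration at each sample size that beats the BDT benchmark.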

Agreement to place materials online: Participants agree to post their abstracts and presentations on the workshop website. All materials will be placed in the form in which they were provided by the authors.

Primary author

Faig Ahmadov (JINR & IP ANAS)

Presentation materials