Quantization of generative adversarial networks

6 Jul 2023, 17:15
15m
MLIT Conference Hall

Big Data, Machine Learning and Artificial Intelligence

Speaker

Egor Mitrofanov

Description

Generative models have become widespread over the past few years and now play a valuable role in content creation. Generative adversarial networks (GANs) are one of the most popular types of generative model. However, the computational resources required to train stable, large-scale, high-resolution models can be enormous, making training, or even running, such models an expensive process. Research on neural network optimization offers techniques that reduce the required GPU memory, speed up training and produce more compact models without a noticeable loss in generated sample quality. In this work we apply quantization techniques to a GAN and evaluate the results on a custom dataset.
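
The abstract does not name the framework or the exact quantization scheme used. Purely for illustration, the sketch below shows post-training static quantization of a toy generator, assuming PyTorch's eager-mode quantization API; the ToyGenerator module, its shapes and the calibration loop are hypothetical placeholders, not the model from this work.

import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    # Hypothetical toy generator used only to illustrate the workflow.
    def __init__(self, latent_channels=128):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # float -> int8 entry point
        self.dequant = torch.quantization.DeQuantStub()  # int8 -> float exit point
        self.body = nn.Sequential(
            nn.Conv2d(latent_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),  # 3-channel "image" output
        )

    def forward(self, z):
        z = self.quant(z)
        x = self.body(z)
        x = self.dequant(x)
        return torch.tanh(x)  # final activation kept in float for simplicity

model = ToyGenerator().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
prepared = torch.quantization.prepare(model)   # insert observers that record value ranges

# Calibration: run a few latent samples so the observers collect activation statistics.
with torch.no_grad():
    for _ in range(32):
        prepared(torch.randn(1, 128, 8, 8))

quantized = torch.quantization.convert(prepared)  # swap in int8 convolution kernels
with torch.no_grad():
    sample = quantized(torch.randn(1, 128, 8, 8))  # int8 inference, float output

When post-training calibration degrades sample quality too much, quantization-aware training (e.g. via torch.quantization.prepare_qat) is the usual alternative, at the cost of extra training time.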

Summary

In this work we present the results of applying quantization to a GAN trained on a custom high-resolution cat muzzle dataset. Using a low-capacity GPU, models were trained with and without quantization, and the quality of the generated images is illustrated. As a quality metric we use the FID (Fréchet inception distance) score.
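
For reference, FID compares Inception feature statistics of real and generated images (means \mu_r, \mu_g and covariances \Sigma_r, \Sigma_g):

\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\bigl(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2}\bigr)

Lower values indicate that the generated images are statistically closer to the real ones.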
