A Critical Review of Generative Adversarial Networks based on Stability Criteria

Conference: International Joint Conferences on Advances in Engineering and Technology
Author(s): Vishal M. Chudasama, Kishor P. Upla Year: 2018
Grenze ID: 02.AET.2018.1.509 Page: 48-54

Abstract

In the machine learning field, traditional deep learning models are mostly discriminative: their goal is to learn a mapping from inputs to outputs, and they require a large amount of annotated data for training. Deep generative models (DGMs), on the other hand, provide a way to learn features effectively from sample data without requiring labels. Among the many DGMs, generative adversarial networks (GANs) are emerging models for both semi-supervised and unsupervised learning. A GAN uses a pair of networks, a generator and a discriminator, trained in a competitive process to learn effective features. However, the training of GANs suffers from the challenging problem of instability. This paper reviews GANs and the challenges of implementing them. We review different GAN models, such as the deep convolutional GAN (DCGAN), Wasserstein GAN (WGAN), WGAN with gradient penalty (WGAN-GP), and boundary equilibrium GAN (BEGAN), which improve the stability of GAN training. The improvement in stability offered by these GANs is evaluated through experiments on the common Fashion-MNIST database. Additionally, the mode collapse problem of GANs, which is tackled using the unrolled GAN, is also reviewed and discussed.
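To make the competitive generator/discriminator process concrete, the following is a minimal sketch of a standard GAN training step (non-saturating BCE loss), not the paper's own code; the network sizes, optimizer settings, and the flattened 28x28 Fashion-MNIST-style input shape are illustrative assumptions.

import torch
import torch.nn as nn

latent_dim = 100
img_dim = 28 * 28  # Fashion-MNIST images, flattened

G = nn.Sequential(  # generator: noise vector -> fake image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
D = nn.Sequential(  # discriminator: image -> probability of being real
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_imgs):
    batch = real_imgs.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Update the discriminator: separate real samples from generated ones.
    z = torch.randn(batch, latent_dim)
    fake_imgs = G(z).detach()
    loss_D = bce(D(real_imgs), real_labels) + bce(D(fake_imgs), fake_labels)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # 2) Update the generator: push the discriminator to label fakes as real.
    z = torch.randn(batch, latent_dim)
    loss_G = bce(D(G(z)), real_labels)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()

# Example usage with a random batch standing in for Fashion-MNIST data,
# scaled to [-1, 1] to match the generator's Tanh output range:
dummy_batch = torch.rand(64, img_dim) * 2 - 1
print(train_step(dummy_batch))

Variants such as WGAN, WGAN-GP, and BEGAN reviewed in the paper differ mainly in the loss and regularization used in this loop (for example, a critic without a sigmoid and a gradient penalty term in WGAN-GP), which is what improves training stability.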

