Ian Goodfellow: Generative Adversarial Networks (NIPS 2016 tutorial)

Generative adversarial networks (GANs) are a recently introduced class of generative models designed to produce realistic samples. This tutorial is intended to be accessible to an audience with no prior experience of GANs, and should prepare attendees to make original research contributions, whether by applying GANs or by improving the core GAN algorithms. GANs act as universal approximators of probability distributions. Such models generally have an intractable log-likelihood gradient and require approximations such as Markov chain Monte Carlo or variational lower bounds to make learning feasible; GANs avoid both classes of approximation. The learning process is a game between two adversaries: a generator network that attempts to produce realistic samples, and a discriminator network that attempts to identify whether samples came from the training data or from the generative model. At the Nash equilibrium of this game, the generator reproduces the data distribution exactly, and the discriminator cannot distinguish model samples from training data. Both networks can be trained with stochastic gradient descent, using exact gradients computed by backpropagation.
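The alternating SGD game described above can be sketched with a toy example. This is a minimal illustration, not the tutorial's own code: it assumes 1-D data, a shift-only generator G(z) = theta + z, and a logistic discriminator, so the gradients can be written by hand.

```python
import numpy as np

# Toy GAN game (illustrative assumptions: 1-D data, shift-only generator,
# logistic discriminator), trained by alternating SGD with exact gradients.
rng = np.random.default_rng(0)

mu_real = 2.0          # real data: x ~ N(2, 1)
theta = 0.0            # generator G(z) = theta + z, z ~ N(0, 1)
w, b = 0.1, 0.0        # discriminator D(x) = sigmoid(w*x + b)
lr_d, lr_g, batch = 0.05, 0.05, 64

sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

for step in range(3000):
    x_real = mu_real + rng.standard_normal(batch)
    x_fake = theta + rng.standard_normal(batch)

    # Discriminator update: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    grad_w = np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake)
    grad_b = np.mean(1 - d_real) - np.mean(d_fake)
    w += lr_d * grad_w
    b += lr_d * grad_b

    # Generator update: ascend log D(fake) (the non-saturating loss).
    x_fake = theta + rng.standard_normal(batch)
    d_fake = sigmoid(w * x_fake + b)
    theta += lr_g * np.mean((1 - d_fake) * w)   # d log D(G(z)) / d theta

print(round(theta, 2))  # the generator's mean drifts toward the real mean, 2.0
```

Under these assumptions the generator parameter converges near the real data mean, and at equilibrium the discriminator's output approaches 1/2 everywhere, matching the indistinguishability claim above.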

Topics include:
– An introduction to the basics of GANs.
– A review of work applying GANs to large image generation.
– Extending the GAN framework to approximate maximum likelihood, rather than minimizing the Jensen-Shannon divergence.
– Improved model architectures that yield better learning in GANs.
– Semi-supervised learning with GANs.
– Research frontiers, including guaranteeing convergence of the GAN game.
– Other applications of adversarial learning, such as domain adaptation and privacy.
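The equilibrium claim in the abstract and the Jensen-Shannon divergence mentioned in the topics are linked by a standard identity: for a fixed generator, the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_model(x)), and substituting it into the GAN value function gives 2·JSD(p_data, p_model) − log 4. A small numerical check, using illustrative discrete distributions chosen here for the example:

```python
import numpy as np

# Two illustrative discrete distributions (assumptions for this sketch).
p_data  = np.array([0.5, 0.3, 0.2])
p_model = np.array([0.2, 0.3, 0.5])

# Optimal discriminator for a fixed generator.
d_star = p_data / (p_data + p_model)

# GAN value function evaluated at the optimal discriminator:
# E_data[log D*] + E_model[log(1 - D*)].
value = np.sum(p_data * np.log(d_star)) + np.sum(p_model * np.log(1.0 - d_star))

# Jensen-Shannon divergence between the two distributions.
m = 0.5 * (p_data + p_model)
kl = lambda p, q: np.sum(p * np.log(p / q))
jsd = 0.5 * kl(p_data, m) + 0.5 * kl(p_model, m)

print(np.isclose(value, 2.0 * jsd - np.log(4.0)))  # prints True
```

This is why minimizing the standard GAN objective corresponds to minimizing the Jensen-Shannon divergence, and why extending the framework toward maximum likelihood (topic above) requires changing the objective.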