Generative adversarial networks

Generative adversarial networks (GANs) are a relatively new branch of unsupervised machine learning, implemented by a system of two neural networks competing against each other. The framework was introduced by Ian Goodfellow et al. in 2014.

Overview

A generative model takes random noise as its initial input and produces synthetic output data, which is then compared against the training data by the second, adversarial model, the discriminator, usually implemented as a standard convolutional neural network.[1] The generator adjusts its parameters so that the discriminator can no longer distinguish generated data from training data. The goal is to find a setting of parameters that makes generated data look like the training data to the discriminator network.[2]
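The adversarial objective can be summarized by the two loss terms from Goodfellow et al. (2014): the discriminator is trained with a binary cross-entropy loss that labels training data as real and generated data as fake, while the generator is trained to make the discriminator assign high probability to its samples. A minimal sketch, assuming the discriminator outputs a probability D(x) in (0, 1):

```python
import math

def discriminator_loss(d_real, d_fake):
    # Binary cross-entropy: push D(x) toward 1 on training data
    # and D(G(z)) toward 0 on generated data.
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    # Non-saturating generator loss: maximize log D(G(z)),
    # i.e. make generated samples look real to the discriminator.
    return -math.log(d_fake)
```

A perfectly confused discriminator (D = 0.5 everywhere) yields a discriminator loss of 2·log 2 ≈ 1.386, the equilibrium value derived in the original paper; the generator loss falls as the discriminator is increasingly fooled.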

Improving sample diversity

GANs are prone to generating samples from too narrow a portion of the training distribution, a failure mode known as mode collapse. Tim Salimans et al. at OpenAI addressed this in 2016 with several improved training techniques, including feature matching and minibatch discrimination.[3][4]
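One of the techniques proposed by Salimans et al., feature matching, trains the generator to match the average activations of an intermediate discriminator layer on real versus generated batches, rather than directly maximizing the discriminator's output. A minimal sketch in plain Python (the function names are illustrative; features would in practice come from a discriminator layer):

```python
def feature_matching_loss(real_features, fake_features):
    # Each argument is a batch of feature vectors (lists of equal length),
    # e.g. activations of an intermediate discriminator layer.
    def batch_mean(feats):
        n = len(feats)
        return [sum(col) / n for col in zip(*feats)]

    mu_real = batch_mean(real_features)
    mu_fake = batch_mean(fake_features)
    # Squared L2 distance between the mean feature vectors of the
    # real batch and the generated batch.
    return sum((r - f) ** 2 for r, f in zip(mu_real, mu_fake))
```

Because the generator must match batch-level statistics rather than fool the discriminator on individual samples, it is discouraged from collapsing onto a single output.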

Application

GANs can be used to produce high-quality, photorealistic image samples for purposes such as visualizing new interior or industrial designs, shoes, bags, and clothing items, or items for computer-game scenes. These networks have also been reported to be in heavy use at Facebook.[5]

References

  1. Goodfellow, Ian J.; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2014). "Generative Adversarial Networks". arXiv:1406.2661 [stat.ML].
  2. "Generative Models". openai.com. Retrieved April 7, 2016.
  3. Salimans, Tim; Goodfellow, Ian; Zaremba, Wojciech; Cheung, Vicki; Radford, Alec; Chen, Xi (2016). "Improved Techniques for Training GANs". arXiv:1606.03498 [cs.LG].
  4. "An introduction to Generative Adversarial Networks (with code in TensorFlow)". blog.aylien.com. August 24, 2016. Retrieved October 25, 2016.
  5. Greenemeier, Larry (June 20, 2016). "When Will Computers Have Common Sense? Ask Facebook". Scientific American. Retrieved July 31, 2016.


This article is issued from Wikipedia (version of 11/7/2016). The text is available under the Creative Commons Attribution/Share Alike license; additional terms may apply for the media files.