Outline
- Abstract
- Related Work
- Main idea
- Discussion
Abstract
Generative Adversarial Networks (GANs) are a new framework for estimating generative models via an adversarial process involving two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G. The training procedure for G is to maximize the probability of D making a mistake. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.
Related Work
Generative Model
Energy-Based Model
- Generative problem: learn the data's probability distribution
- Originates in statistical mechanics
- Any probability distribution can be represented by an energy function (see the Boltzmann distribution below)
- Main idea: correct samples correspond to low energy
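The bridge between energies and probabilities is the Boltzmann (Gibbs) distribution; any strictly positive distribution can be written this way for a suitable energy function $E$:

$$p(x) = \frac{e^{-E(x)}}{Z}, \qquad Z = \sum_{x'} e^{-E(x')}$$

Low energy corresponds to high probability, and the normalizer $Z$ (the partition function) is what makes these models hard to train.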
Boltzmann Machine
- Based on simulated annealing
- Training
- Randomly flip units and follow the likelihood gradient (stochastic gradient descent)
- Generation
- Select a unit at random and sample its output (Gibbs sampling)
- Restricted Boltzmann Machine (RBM)
- Connections restricted to a bipartite graph between visible and hidden units
- Deep Boltzmann Machine (DBM)
- A stack of bipartite layers (a deep bipartite graph)
- Salakhutdinov, R. and Hinton, G., Deep Boltzmann machines, 2009
- Tieleman, T., Training restricted Boltzmann machines using approximations to the likelihood gradient, 2008
- Bengio, Y. et al., Generalized denoising auto-encoders as generative models, 2013
- Breuleux, O. et al., Quickly generating representative samples from an RBM-derived process, 2011
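As a concrete illustration of why the bipartite restriction helps, here is a minimal NumPy sketch of block Gibbs sampling in a binary RBM. The weight matrix `W`, biases `b_v`/`b_h`, and all sizes are illustrative assumptions, not values from the papers above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b_v, b_h, rng):
    """One block Gibbs step for a binary RBM.

    Because the graph is bipartite, all hidden units are conditionally
    independent given the visibles (and vice versa), so each half of the
    step can be sampled in parallel.
    """
    p_h = sigmoid(v @ W + b_h)                # P(h = 1 | v)
    h = (rng.random(p_h.shape) < p_h) * 1.0   # sample hidden units
    p_v = sigmoid(h @ W.T + b_v)              # P(v = 1 | h)
    v_new = (rng.random(p_v.shape) < p_v) * 1.0
    return v_new, h

# Usage: run the chain for a while to draw an approximate sample.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(784, 256))    # visible x hidden weights
b_v, b_h = np.zeros(784), np.zeros(256)
v = rng.integers(0, 2, size=(1, 784)).astype(float)
for _ in range(100):
    v, h = gibbs_step(v, W, b_v, b_h, rng)
```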
Noise-contrastive estimation
Gutmann, M. and Hyvarinen, Noise-contrastive estimation: A new estimation principle for unnormalized statistical models, 2010
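The NCE idea in one formula (written here for equal numbers of data and noise samples): fit the model $p_m$ by logistic regression against a known noise distribution $p_n$,

$$J = \mathbb{E}_{x \sim p_{\text{data}}}[\log h(x)] + \mathbb{E}_{y \sim p_n}[\log(1 - h(y))], \qquad h(u) = \frac{p_m(u)}{p_m(u) + p_n(u)}$$

maximized over the parameters of $p_m$. Seen this way, a GAN replaces the fixed noise distribution with a learned generator.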
Main Idea
- Use a neural network to represent a probability distribution
- Like a DBN
- Use a neural network to discriminate noise from real data
- Like NCE
- Generator
- Maps a noise vector z to an image
- A simple fully connected network
- Discriminator
- Maps an image to a probability in [0, 1]
- A standard deep-learning network, e.g. a CNN
- All samples from the real data are labeled true
- All samples from G are labeled false
- Loss Function (see the value function and training sketch after this list)
- D(x) is the probability that x is real
- Discriminator objective: maximize log D(x) + log(1 - D(G(z)))
- For D, calling a generated sample real is a mistake
- Generator objective: minimize log(1 - D(G(z)))
- For G, having its samples called real is the goal
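Putting the two objectives together gives the minimax value function from the paper:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

Below is a minimal PyTorch sketch of the two networks and one alternating training step. The architecture, sizes, and names (`G`, `D`, `latent_dim`) are illustrative assumptions, not the paper's exact setup; the discriminator here is a small MLP standing in for the CNN mentioned above, and the generator step uses the common non-saturating variant (maximize log D(G(z))) rather than minimizing log(1 - D(G(z))) directly.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 28 * 28  # assumed sizes, e.g. for flattened MNIST

# Generator: a simple fully connected network from z to an image.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Discriminator: maps an image to P(real).
D = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real):
    batch = real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator step: maximize log D(x) + log(1 - D(G(z))),
    # i.e. minimize BCE with labels real=1, fake=0.
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()  # do not backprop into G on this step
    loss_D = bce(D(real), ones) + bce(D(fake), zeros)
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator step (non-saturating variant): maximize log D(G(z)),
    # i.e. minimize BCE of D's output on fakes against the "real" label.
    z = torch.randn(batch, latent_dim)
    loss_G = bce(D(G(z)), ones)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()

# Usage: one step on a dummy batch of flattened images.
real = torch.randn(64, img_dim)
train_step(real)
```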
Discussion
Optimization Algorithm
- Alternate k gradient steps on D with one gradient step on G (Algorithm 1 in the paper)
Mathematical Proof
- For a fixed G the optimal discriminator has a closed form, and the global optimum of the criterion is reached exactly when the generator distribution matches the data distribution (see below)
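From the paper's analysis: for a fixed $G$, the optimal discriminator is

$$D^*_G(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}$$

and substituting it back into the value function gives $\max_D V(D, G) = -\log 4 + 2\,\mathrm{JSD}(p_{\text{data}} \,\|\, p_g)$, so the global minimum is attained if and only if $p_g = p_{\text{data}}$.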
Reference
- Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y., Generative Adversarial Nets, 2014