GAN4 Which Training Methods for GANs do actually Converge? (ICML 2018)
Minhao Jiang (minhaoj2@illinois.edu)
Introduction
Generative Adversarial Networks (GANs) are powerful latent variable models that can be used to learn complex real-world distributions. Recent work established local convergence of GAN training for absolutely continuous data and generator distributions. However, while very powerful, GANs can be hard to train, and in practice it is often observed that gradient descent based GAN optimization does not converge.
In this paper, the authors discuss a counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. The paper also shows how recent techniques for stabilizing GAN training affect local convergence on this example problem: WGAN, WGAN-GP, and DRAGAN do not converge on it. Based on this observation, the paper introduces simplified gradient penalties and proves local convergence of the regularized GAN training dynamics.
Problem Definition
We consider the traditional GAN training objective
$$L(\theta, \psi) = \mathbb{E}_{p(z)}\bigl[f\bigl(D_\psi(G_\theta(z))\bigr)\bigr] + \mathbb{E}_{p_D(x)}\bigl[f\bigl(-D_\psi(x)\bigr)\bigr],$$
where the common choice $f(t) = -\log(1 + \exp(-t))$ is the one considered in the original GAN paper. For technical reasons we assume that $f$ is continuously differentiable and satisfies $f'(t) \neq 0$ for all $t \in \mathbb{R}$. GANs are usually trained using Simultaneous or Alternating Gradient Descent (SimGD and AltGD); both can be viewed as fixed point algorithms that apply an operator $F_h(\theta, \psi) = (\theta, \psi) + h\,v(\theta, \psi)$, where $v$ denotes the gradient vector field
$$v(\theta, \psi) = \begin{pmatrix} -\nabla_\theta L(\theta, \psi) \\ \nabla_\psi L(\theta, \psi) \end{pmatrix}.$$
Recently, it was shown that local convergence of GAN training near an equilibrium point $(\theta^*, \psi^*)$ can be analyzed by looking at the spectrum of the Jacobian $F_h'(\theta^*, \psi^*)$ at the equilibrium:
- If $F_h'(\theta^*, \psi^*)$ has eigenvalues with absolute value bigger than 1, the training algorithm will generally not converge to $(\theta^*, \psi^*)$.
- If all eigenvalues have absolute value smaller than 1, the algorithm converges to $(\theta^*, \psi^*)$ with linear rate $\mathcal{O}(|\lambda_{\max}|^k)$.
- If all eigenvalues lie exactly on the unit circle, training can be convergent, divergent, or neither; when it is convergent, the rate is generally only sublinear (see the numerical sketch below).
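To make the criterion concrete, here is a minimal numerical sketch (not from the paper): for SimGD the update operator is $F_h = \mathrm{id} + h\,v$, so its Jacobian at the equilibrium is $I + hJ$, where $J$ is the Jacobian of $v$. For a purely rotational $J$ (eigenvalues $\pm i$), every positive learning rate yields eigenvalues of modulus greater than 1:

```python
import numpy as np

# Jacobian J of the gradient vector field v at the equilibrium.
# A purely rotational J (eigenvalues +/- i) is the hard case: the
# continuous dynamics circle the equilibrium instead of approaching it.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

h = 0.1  # learning rate
F_prime = np.eye(2) + h * J  # Jacobian of the SimGD operator F_h

print(np.abs(np.linalg.eigvals(F_prime)))
# [1.00499 1.00499] -> modulus > 1 for every h > 0, so SimGD moves
# away from the equilibrium no matter how small the step size is.
```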
Dirac-GAN
Equipped with these definitions, we can now present a simple yet prototypical counterexample showing that, in the general case, unregularized GAN training is neither locally nor globally convergent.
Definition 1 The Dirac-GAN consists of a (univariate) generator distribution $p_\theta = \delta_\theta$ and a linear discriminator $D_\psi(x) = \psi \cdot x$. The true data distribution $p_D$ is given by a Dirac-distribution concentrated at $0$.
Lemma 2.2 The unique equilibrium point of the training objective $L(\theta, \psi) = f(\psi\theta) + f(0)$ is given by $\theta = \psi = 0$. Moreover, the Jacobian of the gradient vector field at the equilibrium has the two eigenvalues $\pm f'(0)\,i$ on the imaginary axis.
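The eigenvalues can be read off directly from the vector field; the following short derivation (reconstructed from the definitions above) fills in the intermediate step:

```latex
L(\theta,\psi) = f(\psi\theta) + f(0), \qquad
v(\theta,\psi) =
\begin{pmatrix} -\nabla_\theta L \\ \nabla_\psi L \end{pmatrix}
=
\begin{pmatrix} -f'(\psi\theta)\,\psi \\ f'(\psi\theta)\,\theta \end{pmatrix},
\qquad
v'(0,0) =
\begin{pmatrix} 0 & -f'(0) \\ f'(0) & 0 \end{pmatrix}
```

The characteristic polynomial $\lambda^2 + f'(0)^2 = 0$ then gives $\lambda_{1/2} = \pm f'(0)\,i$.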
Consider the idealized continuous system underlying the GAN training dynamics. In previous works, it was assumed that the optimal discriminator parameter vector is a continuous function of the current generator parameters.
Lemma 2.3 The integral curves of the gradient vector field $v(\theta, \psi)$ do not converge to the Nash-equilibrium. More specifically, every integral curve $(\theta(t), \psi(t))$ of the gradient vector field satisfies $\theta(t)^2 + \psi(t)^2 = \text{const}$ for all $t \in [0, \infty)$.
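The conserved quantity can be verified in one line from the vector field above:

```latex
\frac{d}{dt}\bigl(\theta(t)^2 + \psi(t)^2\bigr)
= 2\theta\dot\theta + 2\psi\dot\psi
= -2\theta\, f'(\theta\psi)\,\psi + 2\psi\, f'(\theta\psi)\,\theta
= 0 .
```

Hence the training dynamics orbit the equilibrium at a constant distance instead of approaching it.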
In this case, unless $\theta = 0$, there is not even an optimal discriminator parameter for the Dirac-GAN.
The following results show that both of the standard GAN training dynamics, SimGD and AltGD, encounter such instabilities on this example. But where do these instabilities come from?
Figure 1: (a) In the beginning, the discriminator pushes the generator towards the true data distribution, and the discriminator's slope increases. (b) When the generator reaches the target distribution, the slope of the discriminator is largest, pushing the generator away from the target distribution. This results in oscillating behavior that never converges.
Another way to look at it is to consider the local behavior of the training algorithm near the Nash-equilibrium, where there is no incentive for the discriminator to move to the equilibrium discriminator.
Figure 2: Convergence properties of different GAN training algorithms using alternating gradient descent. We can clearly see that neither WGAN nor WGAN-GP converges on this example.
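A miniature version of this experiment is easy to reproduce for the unregularized case (a sketch, assuming $f(t) = -\log(1 + e^{-t})$ as above; illustrative code, not the paper's):

```python
import numpy as np

def fprime(t):
    # f(t) = -log(1 + exp(-t))  =>  f'(t) = 1 / (1 + exp(t))
    return 1.0 / (1.0 + np.exp(t))

def v(theta, psi):
    # Dirac-GAN gradient vector field: L(theta, psi) = f(psi * theta) + f(0)
    return -fprime(psi * theta) * psi, fprime(psi * theta) * theta

h, steps = 0.1, 2000

# Simultaneous gradient descent: both players step from the same point.
ts, ps = 1.0, 1.0
for _ in range(steps):
    dt, dp = v(ts, ps)
    ts, ps = ts + h * dt, ps + h * dp

# Alternating gradient descent: the discriminator sees the updated generator.
ta, pa = 1.0, 1.0
for _ in range(steps):
    dt, _ = v(ta, pa)
    ta += h * dt
    _, dp = v(ta, pa)
    pa += h * dp

print("SimGD distance to equilibrium:", np.hypot(ts, ps))  # slowly grows
print("AltGD distance to equilibrium:", np.hypot(ta, pa))  # oscillates, no decay
```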
Regularization Techniques
A common technique to stabilize GANs is to add instance noise, i.e., independent Gaussian noise, to the data points.
Lemma 3.2 For the Dirac-GAN: when using Gaussian instance noise with standard deviation $\sigma$, the eigenvalues of the Jacobian of the gradient vector field are given by $\lambda_{1/2} = f''(0)\sigma^2 \pm \sqrt{f''(0)^2\sigma^4 - f'(0)^2}$. In particular, all eigenvalues have negative real part if $f''(0) < 0$.
This result also implies that in the case of absolutely continuous distributions, gradient descent based GAN optimization is, under suitable assumptions, locally convergent.
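This is easy to sanity-check numerically (a sketch, not the paper's code; assuming $f(t) = -\log(1+e^{-t})$, for which $f'(0) = 1/2$ and $f''(0) = -1/4$):

```python
import numpy as np

# Numerical check of Lemma 3.2: both eigenvalues of the Jacobian have
# negative real part for every sigma > 0 when f''(0) < 0.
fp0, fpp0 = 0.5, -0.25
for sigma in (0.5, 1.0, 2.0):
    root = np.emath.sqrt((fpp0 * sigma**2) ** 2 - fp0**2)  # complex-aware sqrt
    eigs = (fpp0 * sigma**2 + root, fpp0 * sigma**2 - root)
    print(sigma, eigs)  # real parts are negative in every case
```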
Zero-centered gradient penalties
A penalty on the squared norm of the gradients of the discriminator results in the regularizer $R(\psi) = \frac{\gamma}{2}\,\mathbb{E}_{p_D(x)}\bigl[\|\nabla_x D_\psi(x)\|^2\bigr]$, which for the Dirac-GAN is simply $R(\psi) = \frac{\gamma}{2}\psi^2$.
Lemma 3.3 The eigenvalues of the Jacobian of the gradient vector field for the gradient-regularized Dirac-GAN at the equilibrium point are given by $\lambda_{1/2} = -\frac{\gamma}{2} \pm \sqrt{\frac{\gamma^2}{4} - f'(0)^2}$.
Like instance noise, there is a critical regularization parameter $\gamma_{\text{crit}} = 2|f'(0)|$ (where the discriminant above vanishes) that results in a locally rotation-free vector field. In this case, simultaneous and alternating gradient descent are both locally convergent.
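Adding the penalty term to the Dirac-GAN simulation above makes the convergence visible (again a sketch with $f(t) = -\log(1+e^{-t})$, so $f'(0) = 1/2$ and $\gamma_{\text{crit}} = 1$):

```python
import numpy as np

def fprime(t):
    return 1.0 / (1.0 + np.exp(t))  # f(t) = -log(1 + exp(-t))

gamma = 2 * fprime(0.0)  # critical regularization parameter (= 1 here)
h = 0.1
theta, psi = 1.0, 1.0
for _ in range(1000):
    d_theta = -fprime(psi * theta) * psi
    # The regularizer R(psi) = (gamma / 2) * psi^2 contributes -gamma * psi
    # to the discriminator's gradient ascent direction.
    d_psi = fprime(psi * theta) * theta - gamma * psi
    theta, psi = theta + h * d_theta, psi + h * d_psi

print(theta, psi)  # both tend to 0: SimGD now converges locally
```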
The analysis suggests that the main effect of the zero-centered gradient penalties on local stability is to penalize the discriminator for deviating from the Nash-equilibrium. This motivates the following two gradient penalties, which penalize the discriminator gradients on the true data distribution and on the generator distribution, respectively:
$$R_1(\psi) = \frac{\gamma}{2}\,\mathbb{E}_{p_D(x)}\bigl[\|\nabla_x D_\psi(x)\|^2\bigr], \qquad R_2(\theta, \psi) = \frac{\gamma}{2}\,\mathbb{E}_{p_\theta(x)}\bigl[\|\nabla_x D_\psi(x)\|^2\bigr].$$
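A minimal PyTorch-style sketch of the $R_1$ penalty ($R_2$ is identical with fake samples in place of real ones); the function names and the $\gamma$ value are placeholders, not the paper's reference code:

```python
import torch

def r1_penalty(discriminator, x_real, gamma=10.0):
    # R1 = (gamma / 2) * E_{x ~ p_D}[ ||grad_x D(x)||^2 ]
    x_real = x_real.detach().requires_grad_(True)
    d_out = discriminator(x_real)
    # Gradient of the discriminator output w.r.t. its *input*;
    # create_graph=True so the penalty itself can be backpropagated.
    (grad,) = torch.autograd.grad(
        outputs=d_out.sum(), inputs=x_real, create_graph=True
    )
    return (gamma / 2) * grad.pow(2).flatten(1).sum(1).mean()
```

The returned value is simply added to the discriminator loss at each discriminator update.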
Figure 3: Measuring convergence of GANs is hard for high-dimensional problems, because we lack a metric that can reliably detect non-convergent behavior; hence, experiments were conducted only on 2D problems.
Conclusions: In this paper, the authors analyzed the stability of GAN training on a simple yet prototypical example and showed that (unregularized) gradient based GAN optimization is not always locally convergent. They further proved, under suitable assumptions, local convergence for GAN training regularized with simplified zero-centered gradient penalties.
The relativistic discriminator: a key element missing from standard GAN (ICLR '19)
In the standard generative adversarial network (SGAN), the discriminator D estimates the probability that the input data is real. The generator G is trained to increase the probability that fake data is real. In this paper, the authors argue that the discriminator should also simultaneously decrease the probability that real data is real, because:
- This would account for the a priori knowledge that half of the data in the mini-batch is fake.
- This would be observed with divergence minimization.
- In optimal settings, SGAN would be equivalent to integral probability metric (IPM) GANs.
Introduction
Problem Definition GANs can be defined generally in terms of the discriminator in the following way:
$$L_D = \mathbb{E}_{x_r \sim \mathbb{P}}\bigl[f_1(C(x_r))\bigr] + \mathbb{E}_{x_f \sim \mathbb{Q}}\bigl[f_2(C(x_f))\bigr] \quad (1)$$
$$L_G = \mathbb{E}_{x_r \sim \mathbb{P}}\bigl[g_1(C(x_r))\bigr] + \mathbb{E}_{x_f \sim \mathbb{Q}}\bigl[g_2(C(x_f))\bigr] \quad (2)$$
where $f_1, f_2, g_1, g_2$ are scalar-to-scalar functions, $C(x)$ is the non-transformed discriminator (critic) output, $\mathbb{P}$ is the distribution of the real data, and $\mathbb{Q}$ is the distribution of the fake data.
Integral Probability Metrics (IPM):
IPMs are statistical divergences represented mathematically as
$$\text{IPM}_{\mathcal{F}}(\mathbb{P} \,\|\, \mathbb{Q}) = \sup_{C \in \mathcal{F}} \ \mathbb{E}_{x \sim \mathbb{P}}[C(x)] - \mathbb{E}_{x \sim \mathbb{Q}}[C(x)],$$
where $\mathcal{F}$ is a class of real-valued functions.
IPM-based GANs can be defined using equations (1) and (2), assuming $f_1(y) = g_2(y) = -y$ and $f_2(y) = g_1(y) = y$, with the critic $C$ constrained to the class $\mathcal{F}$.
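The general template and the IPM instantiation can be written down in a few lines (a sketch; `c_real` and `c_fake` stand for the critic outputs $C(x_r)$ and $C(x_f)$ on a mini-batch):

```python
import torch.nn.functional as F

# Equations (1) and (2): losses are built from scalar-to-scalar functions
# f1, f2 (discriminator) and g1, g2 (generator) applied to the critic output.
def gan_loss(fn_real, fn_fake, c_real, c_fake):
    return fn_real(c_real).mean() + fn_fake(c_fake).mean()

# SGAN: f1(y) = -log(sigmoid(y)), f2(y) = -log(1 - sigmoid(y)).
sgan_d_loss = lambda c_real, c_fake: gan_loss(
    lambda y: -F.logsigmoid(y), lambda y: -F.logsigmoid(-y), c_real, c_fake)

# IPM-based GANs: f1(y) = g2(y) = -y and f2(y) = g1(y) = y.
ipm_d_loss = lambda c_real, c_fake: gan_loss(
    lambda y: -y, lambda y: y, c_real, c_fake)
ipm_g_loss = lambda c_real, c_fake: gan_loss(
    lambda y: y, lambda y: -y, c_real, c_fake)
```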
In this paper, the authors argued that the key missing property of SGAN is that the probability of real data being real should decrease as the probability of fake data being real increases.
Figure 1: Expected discriminator output of the real and fake data for the direct minimization of the JSD, actual training of the generator to minimize its loss function, and ideal training of the generator to minimize its loss function.
SGAN completely ignores the a priori knowledge that half of the mini-batch samples are fake. IPM-based GANs, on the other hand, implicitly account for the fact that some of the samples must be fake, because they compare how realistic the real data is relative to the fake data.
In SGAN, the optimal discriminator loss function is equal (up to constants) to the Jensen-Shannon divergence. Thus, training the discriminator can be represented as solving the following maximization problem:
$$\text{JSD}(\mathbb{P} \,\|\, \mathbb{Q}) = \frac{1}{2}\Bigl(\log 4 + \max_{C} \ \mathbb{E}_{x_r \sim \mathbb{P}}\bigl[\log \text{sigmoid}(C(x_r))\bigr] + \mathbb{E}_{x_f \sim \mathbb{Q}}\bigl[\log\bigl(1 - \text{sigmoid}(C(x_f))\bigr)\bigr]\Bigr).$$
The gradient steps of SGAN and IPM-based GANs take the form
$$\nabla_w L_D^{\text{SGAN}} = -\mathbb{E}_{x_r \sim \mathbb{P}}\bigl[(1 - D(x_r))\,\nabla_w C(x_r)\bigr] + \mathbb{E}_{x_f \sim \mathbb{Q}}\bigl[D(x_f)\,\nabla_w C(x_f)\bigr],$$
$$\nabla_w L_D^{\text{IPM}} = -\mathbb{E}_{x_r \sim \mathbb{P}}\bigl[\nabla_w C(x_r)\bigr] + \mathbb{E}_{x_f \sim \mathbb{Q}}\bigl[\nabla_w C(x_f)\bigr],$$
where $D(x) = \text{sigmoid}(C(x))$ and $w$ are the discriminator parameters. In IPMs, both real and fake data contribute equally to the gradient of the discriminator's loss function. However, in SGAN, if the discriminator reaches optimality ($D(x_r) \approx 1$ on real data), the gradient completely ignores real data: unless $C(x_r)$ changes indirectly when training the discriminator on fake data, the discriminator stops learning what it means for data to be "real", and training focuses entirely on fake data.
The relativistic discriminator estimates the probability that the given real data is more realistic than randomly sampled fake data; it is defined as $D(x_r, x_f) = \text{sigmoid}(C(x_r) - C(x_f))$. The discriminator and generator loss functions of the Relativistic Standard GAN (RSGAN) are then
$$L_D = -\mathbb{E}_{(x_r, x_f)}\bigl[\log \text{sigmoid}(C(x_r) - C(x_f))\bigr], \qquad L_G = -\mathbb{E}_{(x_r, x_f)}\bigl[\log \text{sigmoid}(C(x_f) - C(x_r))\bigr].$$
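In code, the RSGAN losses are one line each (a sketch; `c_real`, `c_fake` are critic outputs on paired real/fake mini-batches):

```python
import torch.nn.functional as F

def rsgan_losses(c_real, c_fake):
    # D(x_r, x_f) = sigmoid(C(x_r) - C(x_f)); -log sigmoid is the BCE term.
    loss_d = -F.logsigmoid(c_real - c_fake).mean()
    # The generator plays the reversed game: fake should beat real.
    loss_g = -F.logsigmoid(c_fake - c_real).mean()
    return loss_d, loss_g
```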
More generally, for a discriminator applied to the difference $C(x_r) - C(x_f)$, any GAN of the form of equations (1) and (2) admits a relativistic counterpart (RGAN):
$$L_D = \mathbb{E}_{(x_r, x_f)}\bigl[f_1\bigl(C(x_r) - C(x_f)\bigr)\bigr] + \mathbb{E}_{(x_r, x_f)}\bigl[f_2\bigl(C(x_f) - C(x_r)\bigr)\bigr],$$
$$L_G = \mathbb{E}_{(x_r, x_f)}\bigl[g_1\bigl(C(x_r) - C(x_f)\bigr)\bigr] + \mathbb{E}_{(x_r, x_f)}\bigl[g_2\bigl(C(x_f) - C(x_r)\bigr)\bigr].$$
In RGANs, the discriminator's output on real data is influenced by the fake data $x_f$, and thus by the generator. This means that in most RGANs, the generator is trained to minimize the full loss function envisioned rather than only half of it.
Although the relativistic discriminator provides the missing property we want in GANs (i.e., the generator influencing the discriminator's evaluation of real data), its interpretation is different from the standard discriminator's. Rather than measuring "the probability that the input data is real", it now measures "the probability that the input data is more realistic than a randomly sampled data point of the opposing type".
To retain the standard interpretation while keeping relativism, we define the Relativistic average Discriminator (RaD):
$$\bar{D}(x_r) = \text{sigmoid}\bigl(C(x_r) - \mathbb{E}_{x_f \sim \mathbb{Q}}[C(x_f)]\bigr), \qquad \bar{D}(x_f) = \text{sigmoid}\bigl(C(x_f) - \mathbb{E}_{x_r \sim \mathbb{P}}[C(x_r)]\bigr),$$
where $\mathbb{P}$ and $\mathbb{Q}$ are the real and fake data distributions, respectively.
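The corresponding Relativistic average SGAN (RaSGAN) losses follow directly from this definition (again a sketch with placeholder names):

```python
import torch.nn.functional as F

def rasgan_losses(c_real, c_fake):
    # D_bar(x_r) = sigmoid(C(x_r) - E[C(x_f)])
    # D_bar(x_f) = sigmoid(C(x_f) - E[C(x_r)])
    rel_real = c_real - c_fake.mean()
    rel_fake = c_fake - c_real.mean()
    # Discriminator: real should score above the average fake, fake below
    # the average real.  Note log(1 - sigmoid(z)) = logsigmoid(-z).
    loss_d = -(F.logsigmoid(rel_real) + F.logsigmoid(-rel_fake)).mean()
    # Generator: the roles are swapped.
    loss_g = -(F.logsigmoid(rel_fake) + F.logsigmoid(-rel_real)).mean()
    return loss_d, loss_g
```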
Figure 2: Case studies on example images explaining why we should use relative probabilities in RGANs.
Figure 4: Experimental results of different GAN loss functions on the CIFAR-10 dataset, measured with FID scores.
Conclusions: In this paper, the authors proposed the relativistic discriminator as a way to fix and improve the standard GAN. They further generalized this approach to any GAN loss and introduced a generally more stable variant called RaD. The results suggest that relativism significantly improves the data quality and stability of GANs at no computational cost.