AR2 Pixel recurrent neural networks
Dachun Sun (dsun18@illinois.edu)
The Ultimate Goal
Generative models, which fit a distribution from samples and then generate new examples from it, have recently seen staggering development. Many generated images and audio clips are of amazing quality and realism. To be formal, given a random variable $x$ with true distribution $p(x)$ over a space $\mathcal{X}$, we would like to fit an approximate distribution $q(x)$, where $\mathcal{X}$ is typically some discretization of the underlying continuous space (for images, e.g., 256 intensity levels per channel).
Most simply, this problem could be solved by minimizing the Kullback-Leibler (KL) divergence, essentially pulling the approximate and the real distribution close:

$$D_{\mathrm{KL}}(p \,\|\, q) = \mathbb{E}_{x \sim p}\big[\log p(x) - \log q(x)\big] = \sum_{x} p(x) \log \frac{p(x)}{q(x)}.$$

However, computing this directly is only tractable when the space is small and low-dimensional.
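As a small, purely illustrative sketch, here is the KL divergence computed for two discrete distributions in Python; note how each state's contribution is weighted by $p(x)$, so low-probability regions barely move the number:

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) = sum_x p(x) * log(p(x) / q(x)) for discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sum(p * np.log(p / q))

p  = np.array([0.70, 0.20, 0.09, 0.01])     # "true" distribution with a small tail
q1 = np.array([0.70, 0.20, 0.09, 0.01])     # perfect fit
q2 = np.array([0.70, 0.20, 0.099, 0.001])   # badly misses the rare state

print(kl_divergence(p, q1))  # 0.0
print(kl_divergence(p, q2))  # ~0.015: the tail error is down-weighted by p(x) = 0.01
```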
Therefore, the development and capacity of state-of-the-art generative models are largely built upon fundamental advances in autoregressive density estimation1, variational inference2, and generative adversarial networks3. Let us now look at how each of them approaches this goal, and what their common limitations are.
What is Already There
The core problem is that modeling a high-dimensional joint distribution requires exponentially many parameters as the dimension grows. The following methods are different approaches to circumventing this issue by making assumptions, simplifications, or viewing the problem from another perspective.
Autoregressive Models (ARs)
Autoregressive models typically factorize the joint distribution by the chain rule of probability, and impose conditional independence assumptions to reduce the number of conditionals needed. The following formula explains them all, where $\pi$ is a permutation of the dimensions, included to keep the ordering general:

$$q(x) = \prod_{i=1}^{N} q\big(x_{\pi(i)} \mid x_{\pi(1)}, \dots, x_{\pi(i-1)}\big).$$
The model is usually straightforward, but the choice of ordering can be an issue. Also, the autoregressive nature tends to make generation slow, since each dimension must be sampled sequentially.
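To make the generation cost concrete, here is a minimal, hypothetical sketch of autoregressive sampling in Python; `cond_dist` is a stand-in for any learned conditional (e.g. a PixelCNN output head), not actual model code:

```python
import numpy as np

def sample_autoregressive(cond_dist, n_dims, n_levels=256, rng=None):
    """Sample one example dimension-by-dimension.

    cond_dist(prefix) must return a probability vector over the n_levels
    possible values of the next dimension, conditioned on the values
    already generated (the prefix).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = []
    for i in range(n_dims):              # one model evaluation per dimension
        probs = cond_dist(x)             # q(x_i | x_1, ..., x_{i-1})
        x.append(rng.choice(n_levels, p=probs))
    return np.array(x)

# Toy conditional that ignores the prefix: uniform over the 256 levels.
uniform = lambda prefix: np.full(256, 1.0 / 256)
sample = sample_autoregressive(uniform, n_dims=32 * 32 * 3)  # ~3k sequential steps
```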
Variational Autoencoders (VAEs)
Another perspective is to represent $q(x)$ as a marginalization over a latent random variable $z$, i.e. $q_\theta(x) = \int q_\theta(x \mid z)\, q(z)\, dz$. Then, with the relation below, maximizing the evidence lower bound (ELBO) pushes the approximate $q_\theta(x)$ close to $p(x)$:

$$\log q_\theta(x) \;\ge\; \mathbb{E}_{z \sim r_\phi(z \mid x)}\big[\log q_\theta(x \mid z)\big] \;-\; D_{\mathrm{KL}}\big(r_\phi(z \mid x) \,\|\, q(z)\big),$$

where $r_\phi(z \mid x)$ is the encoder's approximate posterior.
VAEs are straightforward to implement and optimize, and efficient at generation and at capturing global structure in high-dimensional spaces. However, VAEs often miss fine-grained details.
Generative Adversarial Networks (GANs)
We could also tackle this problem from another perspective: a two-player zero-sum game. We have two players, a generator $G$ and a discriminator $D$. The generator tries to generate fake examples from noise $z \sim p_z$ that mimic the true distribution, and the discriminator tries to distinguish fake examples from real data points. The objective can then be written as the following,

$$\min_G \max_D \; \mathbb{E}_{x \sim p}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big].$$
| Image Credits to Generative Adversarial Networks (GANs) in 50 lines of code (PyTorch) |
This game is essentially minimizing the Jensen-Shannon divergence, which is itself built from KL divergences. GANs are infamously unstable to train, and getting training started is also hard.
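For reference (a standard result, not specific to this post), the Jensen-Shannon divergence is a symmetrized combination of KL divergences, and with an optimal discriminator the generator's objective $C(G)$ reduces to it up to constants, where $q_G$ is the generator's distribution:

$$\mathrm{JSD}(p \,\|\, q) = \tfrac{1}{2} D_{\mathrm{KL}}\!\Big(p \,\Big\|\, \tfrac{p+q}{2}\Big) + \tfrac{1}{2} D_{\mathrm{KL}}\!\Big(q \,\Big\|\, \tfrac{p+q}{2}\Big), \qquad C(G) = 2\,\mathrm{JSD}(p \,\|\, q_G) - \log 4.$$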
Hail to KL-divergence
| Image Credits to Understanding Cross-entropy for Machine Learning |
As we can see, all of the state-of-the-art methods rely on the KL divergence in one way or another. Even GANs are in effect minimizing a divergence deeply related to it. However, the KL divergence is known to have trouble capturing the low-probability tails of the density function, because it is essentially an expected deviation under the data distribution: regions carrying little probability mass contribute little to the loss.
But is There Another Choice?
Sure. All the methods above try to approximate the density function directly. Why can't we approximate another function deeply related to it, such as the cumulative distribution function (CDF) or the inverse CDF? In order to achieve this goal, let us look at what tools we already have.
Quantile
Let $X$ be a random variable with CDF $F_X$. The $\tau$-th quantile of $X$ is given by

$$Q_X(\tau) = F_X^{-1}(\tau) = \inf\{x : F_X(x) \ge \tau\}.$$
This essentially means that we need to find a point $x$ such that a $\tau$ fraction of the probability mass lies below it. To make the example more concrete, consider a standard Gaussian random variable $X \sim \mathcal{N}(0, 1)$: its 0.1-th quantile is about $-1.28$ and its 0.9-th quantile is about $1.28$.
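As a quick sanity check (a minimal sketch using SciPy; the numbers above are for the standard Gaussian):

```python
from scipy.stats import norm

# The quantile function (inverse CDF) is the "percent point function" in SciPy.
print(norm.ppf(0.1))            # ≈ -1.2816: 10% of the mass lies below this value
print(norm.ppf(0.9))            # ≈  1.2816: 90% of the mass lies below this value
print(norm.cdf(norm.ppf(0.9)))  # ≈ 0.9, confirming F(F^{-1}(tau)) = tau
```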
Quantile Regression
What can quantiles do? They take us one step beyond linear regression. Given a dataset $\{(x_i, y_i)\}_{i=1}^{n}$ and a quantile level $\tau$, approximate the conditional quantile function at $\tau$ with a linear model $\hat{y} = \hat{Q}_{Y \mid X = x}(\tau) = w_\tau^\top x + b_\tau$, under the loss function

$$\rho_\tau(u) = u\,\big(\tau - \mathbf{1}[u < 0]\big),$$

where $u = y - \hat{y}$ is the error.
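A minimal sketch of this loss in Python (the function name and example values are mine, chosen for illustration):

```python
import numpy as np

def quantile_loss(y_true, y_pred, tau):
    """Pinball / quantile loss: mean of u * (tau - 1[u < 0]) with u = y_true - y_pred."""
    u = y_true - y_pred
    return np.mean(u * (tau - (u < 0).astype(float)))

# Asymmetry at tau = 0.1: underestimating is cheap, overestimating is expensive.
print(quantile_loss(np.array([1.0]), np.array([0.0]), tau=0.1))  # 0.1 (prediction too low)
print(quantile_loss(np.array([0.0]), np.array([1.0]), tau=0.1))  # 0.9 (prediction too high)
```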
As we can see, if we fix a $\tau$, the formulation is essentially the same as linear regression except for the special loss function. How is this regression useful? Let us look at an example.
Suppose you ordered UberEats, and you have a dataset of historical deliveries relating distance to delivery time. Now you need to give a time-range estimate, given the distance, that covers 80% of customers' delivery times. We could fit a 0.1-th and a 0.9-th quantile regression model and report the range between them, as sketched below.
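A hedged sketch of how that could look with scikit-learn's `QuantileRegressor` on synthetic data (the data-generating process below is invented purely for illustration):

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
distance = rng.uniform(1, 10, size=(500, 1))                  # km
time = 10 + 3 * distance[:, 0] + rng.gamma(2.0, 2.0, 500)     # minutes, right-skewed noise

lo = QuantileRegressor(quantile=0.1, alpha=0.0).fit(distance, time)
hi = QuantileRegressor(quantile=0.9, alpha=0.0).fit(distance, time)

d = np.array([[5.0]])  # a 5 km delivery
print(f"80% of deliveries should arrive in {lo.predict(d)[0]:.0f}-{hi.predict(d)[0]:.0f} min")
```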
Quantile Loss
Take a look again at the expression of the quantile loss: the penalty for underestimation and overestimation is different, depending on $\tau$. If we look at the loss function at $\tau = 0.1$, we have

$$\rho_{0.1}(u) = \begin{cases} 0.1\, u & u \ge 0 \quad (\text{underestimation}), \\ 0.9\, |u| & u < 0 \quad (\text{overestimation}). \end{cases}$$

For underestimation ($u > 0$), the penalty per unit of error is only 0.1, but for overestimation it is 0.9 (the factor $\tau - 1 = -0.9$ times a negative error). If the regressor sits in the middle of the data blob, how should it move to minimize the loss? Obviously, more underestimation is cheaper, so the regressor will move down toward the red line (the 0.1-quantile), which is essentially how quantile regression works.
If the quantile loss is small at every $\tau$, though, we can conclude that we have captured almost all the details of the distribution, even where the density is low. So could this be our substitute for the KL divergence?
Modeling from Another Perspective
So, instead of modeling the density directly, we could approximate the inverse CDF. This is almost equivalent, because we can recover the density estimate from the inverse CDF.

Similar to approximating density functions, we have to decide on a factorization of the quantile function (inverse CDF) in high-dimensional space to make it tractable.
- If the quantile function depends on a single scalar $\tau$ shared across all dimensions, then we need the comonotonicity property to ensure invertibility (obvious because there is no negative probability: the CDF must be non-decreasing along any dimension, which is essentially what comonotonicity implies). This is a very strong assumption, and could hardly be used broadly.
- On the other hand, if we use a separate $\tau_i$ for each component, $x_i = Q_i(\tau_i)$, we are assuming independence between all components, which is unrealistically restrictive for many domains.
So we do the same thing as the autoregressive density models: factorize the quantile function, and make some conditional independence assumptions, as written out below.
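Concretely, one way to write the factorized sampler, in the same notation as the autoregressive density factorization above (with $\pi$ a permutation of the dimensions), is:

$$x_{\pi(i)} = Q_\theta\big(\tau_i \,\big|\, x_{\pi(1)}, \dots, x_{\pi(i-1)}\big), \qquad \tau_i \sim \mathrm{Uniform}[0, 1], \quad i = 1, \dots, N.$$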
Let’s Reparameterize on Sampled Quantiles
Naturally, since we are approximating the quantile function (inverse CDF), we choose the quantile loss as the objective to minimize. However, does this loss really lead to some divergence metric between $p$ and $q$? In other words, are we doing the correct thing, eventually approximating the density function?
Validity
Let us compute the expected quantile loss over the distribution $p$ for a quantile $\tau$ and a candidate value $\theta$, $L(\theta) = \mathbb{E}_{x \sim p}\big[\rho_\tau(x - \theta)\big]$, following these steps:

- Expand the definition of expectation: $L(\theta) = \int \big(\tau - \mathbf{1}[x < \theta]\big)(x - \theta)\, p(x)\, dx$.
- Split the first integral, and merge one part of it with the second: $L(\theta) = \tau \int (x - \theta)\, p(x)\, dx - \int_{-\infty}^{\theta} (x - \theta)\, p(x)\, dx$.
- Split the integrals again, and evaluate the first pieces according to the definition of expectation: $L(\theta) = \tau\, \mathbb{E}_p[x] - \tau\theta - \int_{-\infty}^{\theta} x\, p(x)\, dx + \theta \int_{-\infty}^{\theta} p(x)\, dx$.
- Evaluate the last integral by the definition of the CDF, $\int_{-\infty}^{\theta} p(x)\, dx = F_p(\theta)$, and take the remaining integral by parts, where $u = x$ and $dv = p(x)\, dx$ (so $v = F_p(x)$): $\int_{-\infty}^{\theta} x\, p(x)\, dx = \theta F_p(\theta) - \int_{-\infty}^{\theta} F_p(x)\, dx$.
- Cancel the two $\theta F_p(\theta)$ terms, and we arrive at the final expression: $L(\theta) = \tau\, \mathbb{E}_p[x] - \tau\theta + \int_{-\infty}^{\theta} F_p(x)\, dx$.
Setting the derivative $\frac{dL}{d\theta} = F_p(\theta) - \tau$ to zero, it is clear that the true quantile function $F_p^{-1}(\tau)$ minimizes the expected quantile loss over $p$. Let us now get an expression for the relative (excess) loss of an approximation,
Suppose we have an approximate distribution $q$, whose quantile function is $F_q^{-1}$; then the expected relative loss over all $\tau$'s is the following:

$$\int_0^1 \Big[ L\big(F_q^{-1}(\tau)\big) - L\big(F_p^{-1}(\tau)\big) \Big]\, d\tau \;=\; \int_0^1 \int_{F_p^{-1}(\tau)}^{F_q^{-1}(\tau)} \big(F_p(x) - \tau\big)\, dx\, d\tau.$$

Finally, we observe that this defines a measure of discrepancy between two distributions, called the quantile divergence.
Quantile Divergence
This means that modeling the quantile function with the quantile loss does lead to an eventual approximation of the true distribution. Let us have a closer look at how the quantile divergence measures the difference between two distributions.
| Correction: the integrand should be $(F_p(x) - \tau)$, credits to 4. |
For a given $\tau$, we can see that the inner integral evaluates to the blue area between the two quantile functions, and we are summing these areas over all $\tau$'s. Therefore, this integral vanishes exactly when the two quantile functions match for every $\tau$, which supports the claim that the quantile loss will not miss any low-density region of the distribution.
Unbiased Estimate
Finally, if we take the gradient of the expected quantile loss, with $\tau$ sampled uniformly, we get an unbiased estimate of the gradient of the quantile divergence. Once again, this shows that the new scheme works, leading to an approximation of the true distribution.
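In symbols (my notation, consistent with the derivation above), for a parametric quantile model $Q_\theta$ trained with $\tau \sim \mathrm{Uniform}[0,1]$ and data $x \sim p$, the sampled gradient of the quantile loss is unbiased, since the true-quantile term of the quantile divergence does not depend on $\theta$:

$$\mathbb{E}_{\tau \sim U[0,1]}\, \mathbb{E}_{x \sim p}\Big[ \nabla_\theta\, \rho_\tau\big(x - Q_\theta(\tau)\big) \Big] \;=\; \nabla_\theta\, q\big(p,\, Q_\theta\big).$$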
Source of Randomness
We know that, specifically for VAEs, there is a reparameterization trick that separates the source of randomness out into a standard Gaussian distribution. Now that we model the quantile function, how do we get samples from it? Where is the source of randomness now?
It is $\tau$. Since quantile functions are essentially inverse CDFs, drawing a uniform random $\tau \sim \mathrm{Uniform}[0, 1]$ and feeding it to the model gives us a sample back (this is exactly inverse transform sampling). Here is an illustration of how it works.
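In code, a minimal runnable sketch of this idea for the factorized case; the conditional quantile model here is a toy stand-in of my own, not the paper's network:

```python
import numpy as np
from scipy.stats import norm

def sample_autoregressive_quantile(cond_quantile, n_dims, rng=None):
    """Generate one sample by feeding fresh uniform taus through the
    factorized conditional quantile functions, one dimension at a time."""
    rng = np.random.default_rng() if rng is None else rng
    x = []
    for i in range(n_dims):
        tau = rng.uniform()               # the only source of randomness
        x.append(cond_quantile(tau, x))   # x_i = Q_i(tau | x_1, ..., x_{i-1})
    return np.array(x)

# Toy stand-in: a Gaussian whose mean drifts with the running sum of the prefix.
toy = lambda tau, prefix: norm.ppf(tau, loc=0.1 * sum(prefix), scale=1.0)
print(sample_autoregressive_quantile(toy, n_dims=5))
```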
Results
Gated PixelCNN5 is the model the authors modify. The original formulation has a location-dependent conditioning variable, which is now used to condition on the random source $\tau$. The modified version, PixelIQN, produces pixel values directly, instead of outputting a discrete distribution over 256 levels for each RGB channel.
| PixelIQN architecture, similar to Gated PixelCNN with $\tau$ taking the place of the location-dependent conditioning. Credits to 4. |
The datasets used are CIFAR-10 and ImageNet 32x32, with metrics including the Fréchet Inception Distance (FID, lower is better) and the Inception Score (IS, higher is better).
Training and Performance
| Training curves. Dotted lines correspond to models trained with class-label conditioning. Credits to 4. |
| Inception score and FID for CIFAR-10 and ImageNet. PixelIQN(1) is the small 15-layer version of the model. Models marked * refer to class-conditional training. Credits to 4. |
Samples
| CIFAR-10: Real example images (left), samples generated by PixelCNN (center), and samples generated by PixelIQN (right). Credits to 4. |
| ImageNet 32x32: Real example images (left), samples generated by PixelCNN (center), and samples generated by PixelIQN (right). Credits to 4. |
Inpainting
| Small ImageNet inpainting examples. Left image is the input provided to the network at the beginning of sampling, right is the original image, columns in between show different completions. Credits to 4. |
Class Conditioning
| Class-conditional samples from PixelIQN. Credits to 4. |
Conclusion
The authors recognize that most current state-of-the-art models are built on top of developments in autoregressive models, VAEs, and GANs, all of which employ the KL divergence as the measure between two distributions. Here, the quantile function and the quantile loss are used to achieve the same tasks, which may be more suitable for applications that care about the low-density regions of the distribution.
Although this new approach does not reduce training or inference time, it offers an important alternative perspective on density estimation.
van den Oord, A., Kalchbrenner, N., and Kavukcuoglu, K. Pixel recurrent neural networks. In Proceedings of the International Conference on Machine Learning, 2016c. ↩︎
Kingma, D. P. and Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. ↩︎
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014. ↩︎
Ostrovski, Georg, Will Dabney, and Rémi Munos. Autoregressive quantile networks for generative modeling. In International Conference on Machine Learning. PMLR, 2018. ↩︎
van den Oord, A., Kalchbrenner, N., Espeholt, L., Vinyals, O., Graves, A., and Kavukcuoglu, K. Conditional image generation with PixelCNN decoders. In Advances in Neural Information Processing Systems, pp. 4790–4798, 2016b. ↩︎
