An Explanatory Analysis of the Geometry of Latent Variables Learned by Variational Auto-Encoders

NIPS 2017 Workshop on Bayesian Deep Learning (2017).


Abstract

Variational Auto-Encoders (VAEs) are generative models consisting of two cascaded networks: a recognition network and a generative network. Under the variational inference framework, the original training algorithms for VAEs optimize a lower bound on the log-likelihood derived using the Kullback-Leibler divergence. More recent literature has focused on improving the log-likelihood through alternative bounds, such as those derived from the Rényi divergence and their reformulations in terms of importance sampling. However, a thorough description of the influence of such bounds on the quality of the latent representation is still lacking. Defining what makes one latent representation better than another is not trivial, yet learning adequate representations is one of the main determinants of the performance of VAEs. Representations in the latent space are reportedly distributed in a coherent way, and the sub-manifold of observations appears to be mapped into an affine space. However, the explicit choice of the prior over the latent space remains the only known element in the construction of the geometry of this space. In this work-in-progress paper, we investigate, by means of an explanatory analysis, the factors that shape the geometry of the latent space of VAEs. We evaluate the impact of different structural parameters of the model and of the cost function optimized during training.
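To make the training objective concrete, the following is a minimal sketch of the lower bound mentioned above: a Monte-Carlo estimate of the ELBO for a single datapoint, with a diagonal-Gaussian recognition model and a toy Bernoulli generative model. The `decode` map and all shapes here are hypothetical stand-ins, not the architecture used in the paper; the closed-form KL term assumes the standard-normal prior `N(0, I)`.

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo(x, mu, log_var, decode, n_samples=64):
    """Monte-Carlo estimate of the evidence lower bound for one datapoint x:
        ELBO = E_{q(z|x)}[log p(x|z)] - KL(q(z|x) || N(0, I)),
    where q(z|x) = N(mu, diag(exp(log_var))) comes from the recognition network.
    """
    std = np.exp(0.5 * log_var)
    # Reparameterised samples: z = mu + std * eps, eps ~ N(0, I)
    eps = rng.standard_normal((n_samples, mu.size))
    z = mu + std * eps
    # Expected reconstruction log-likelihood under a Bernoulli decoder
    p = decode(z)  # shape (n_samples, x.size), entries in (0, 1)
    log_px_z = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p), axis=1)
    # KL(N(mu, diag(std^2)) || N(0, I)) in closed form
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return log_px_z.mean() - kl

# Hypothetical generative network: a fixed linear map plus a sigmoid.
W = rng.standard_normal((2, 4)) * 0.1
decode = lambda z: 1.0 / (1.0 + np.exp(-(z @ W)))

x = np.array([1.0, 0.0, 1.0, 1.0])   # one binarised observation
mu, log_var = np.zeros(2), np.zeros(2)  # encoder output for x (toy values)
print(elbo(x, mu, log_var, decode))
```

The alternative bounds discussed in the abstract (e.g. importance-weighted or Rényi bounds) replace the simple average of `log p(x|z)` over samples with a log-mean of importance weights, tightening the bound as the number of samples grows.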
