Towards the Use of Gaussian Graphical Models in Variational Autoencoders


ICML 2017 Workshop on Implicit Models (2017).


Variational autoencoders have become one of the most powerful tools for approximate inference in deep learning. This paper investigates the use of Gaussian graphical models to approximate the posterior distribution, in order to model correlations between latent variables. We present two examples, a chain model and a regular grid, which capture correlations not represented by the independence model. Both models allow efficient stochastic backpropagation, with a computational complexity linear in the number of non-zero entries of the precision matrix. We argue that, unlike standard inference based on graphical models, in the context of VAEs the topology of the graph is invariant under permutation of the variables in the graphical model when fully connected neural networks are used, due to the interchangeability of the output nodes. Our approach can be extended to the conditional distribution of the observed variables.
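The key mechanism behind the linear-cost stochastic backpropagation can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: for a chain model the precision matrix is tridiagonal, its Cholesky factor is bidiagonal, and the reparameterization z = μ + L⁻ᵀε yields samples with covariance Λ⁻¹, so a sparse triangular solve suffices. The dimension, parameter values, and the dense NumPy solve below are illustrative assumptions (a banded solver would realize the O(n) cost).

```python
import numpy as np

# Hypothetical sketch: reparameterized sampling from a Gaussian posterior
# whose precision matrix Lam has chain (tridiagonal) structure, i.e.
# O(n) non-zero entries.
rng = np.random.default_rng(0)
n = 6

# Chain-structured precision: diagonally dominant, hence positive definite.
diag = 2.0 * np.ones(n)
off = -0.8 * np.ones(n - 1)
Lam = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

mu = rng.normal(size=n)

# Cholesky factor of a tridiagonal matrix is lower bidiagonal, so the
# factorization and the triangular solve cost O(n) with a banded solver
# (a dense solve is used here purely for brevity).
L = np.linalg.cholesky(Lam)

# Reparameterization: z = mu + L^{-T} eps gives
# Cov(z) = L^{-T} L^{-1} = (L L^T)^{-1} = Lam^{-1}.
eps = rng.normal(size=n)
z = mu + np.linalg.solve(L.T, eps)

# Empirical check: the sample covariance approaches Lam^{-1}.
eps_many = rng.normal(size=(100_000, n))
zs = mu + np.linalg.solve(L.T, eps_many.T).T
emp_cov = np.cov(zs, rowvar=False)
max_err = np.max(np.abs(emp_cov - np.linalg.inv(Lam)))
```

Because the whole sampling path is differentiable in μ and the entries of L, gradients with respect to the variational parameters can flow through z, which is what makes this reparameterization usable inside a VAE.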
