Towards the Use of Gaussian Graphical Models in Variational Autoencoders
Alexandra Pește, Luigi Malagò
Variational autoencoders have become one of the most powerful tools for approximate inference in deep learning. This paper investigates the use of Gaussian Graphical Models to approximate the posterior distribution, in order to model correlations between latent variables. We present two examples, a chain model and a regular grid, which capture correlations not representable by the independence model. Both models allow efficient stochastic backpropagation, with a computational complexity guaranteed to be linear in the number of non-zero entries of the precision matrix. We argue that, unlike in standard inference based on graphical models, in the context of VAEs the topology of the graph is invariant to permutations of the variables when fully-connected neural networks are used, due to the interchangeability of the output nodes. Our approach can be extended to the conditional distribution of the observed variables.
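To illustrate the linear-cost claim for the chain model, the following is a minimal NumPy sketch (not the authors' implementation) of reparameterized sampling from a Gaussian with tridiagonal precision matrix: a bidiagonal Cholesky factorization followed by a back-substitution, both O(n), so gradients with respect to the mean and the precision entries can also be backpropagated at linear cost. The function names and the (d, e) parameterization of the tridiagonal precision are illustrative assumptions.

```python
import numpy as np

def chol_tridiag(d, e):
    """Cholesky factor L (lower bidiagonal, Lambda = L L^T) of a
    tridiagonal SPD precision matrix with main diagonal d and
    off-diagonal e, computed in O(n)."""
    n = len(d)
    l = np.empty(n)      # diagonal of L
    m = np.empty(n - 1)  # sub-diagonal of L
    l[0] = np.sqrt(d[0])
    for i in range(1, n):
        m[i - 1] = e[i - 1] / l[i - 1]
        l[i] = np.sqrt(d[i] - m[i - 1] ** 2)
    return l, m

def sample_chain(mu, d, e, rng):
    """Reparameterized sample x = mu + L^{-T} eps, so that
    Cov(x) = (L L^T)^{-1} = Lambda^{-1}. The back-substitution that
    solves L^T (x - mu) = eps touches each non-zero of L once,
    hence the cost is linear in n."""
    l, m = chol_tridiag(d, e)
    n = len(d)
    eps = rng.standard_normal(n)
    x = np.empty(n)
    x[-1] = eps[-1] / l[-1]
    for i in range(n - 2, -1, -1):  # back-substitution on upper bidiagonal L^T
        x[i] = (eps[i] - m[i] * x[i + 1]) / l[i]
    return mu + x
```

The same scheme extends to the grid model with a banded (rather than tridiagonal) precision, keeping the cost proportional to the number of non-zero entries.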