
Neighbor Embedding Variational Autoencoder

Published 21 Mar 2021 in cs.LG and stat.ML | arXiv:2103.11349v1

Abstract: As one of the most popular generative frameworks, variational autoencoders (VAE) are known to suffer from a phenomenon termed posterior collapse, i.e. the latent variational distributions collapse to the prior, especially when a strong decoder network is used. In this work, we analyze the latent representations of collapsed VAEs and propose a novel model, the neighbor embedding VAE (NE-VAE), which explicitly constrains the encoder to map inputs that are close in the input space to points that are close in the latent space. We observe that VAE variants reporting similar ELBO, KL divergence, or even mutual information scores may still behave quite differently in their latent organization. In our experiments, NE-VAE produces qualitatively different latent representations, with the majority of the latent dimensions remaining active, which may benefit downstream latent space optimization tasks. NE-VAE prevents posterior collapse to a much greater extent than its predecessors, and can easily be plugged into any autoencoder framework without introducing additional model components or complex training routines.
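The core mechanism, as the abstract describes it, is a regularizer that encourages the encoder to map nearby inputs to nearby latent codes, added on top of the standard VAE objective. The sketch below is a minimal PyTorch illustration of that idea; the Gaussian perturbation scheme and the `sigma`, `lam`, and `encoder` names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def ne_vae_loss(x, x_recon, mu, logvar, encoder, sigma=0.1, lam=1.0):
    """Sketch of a VAE loss with a neighbor-embedding penalty.

    Assumes `encoder(x)` returns `(mu, logvar)` of the variational
    posterior. `sigma` (perturbation scale) and `lam` (penalty weight)
    are hypothetical hyperparameters for illustration.
    """
    # Standard VAE terms: reconstruction + KL(q(z|x) || N(0, I)).
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

    # Neighbor-embedding term: perturb x slightly, then penalize the
    # displacement of the latent mean, so that inputs close in input
    # space are encoded close in latent space.
    x_neighbor = x + sigma * torch.randn_like(x)
    mu_neighbor, _ = encoder(x_neighbor)
    ne_penalty = F.mse_loss(mu_neighbor, mu, reduction="sum")

    return recon + kl + lam * ne_penalty
```

Because the penalty only adds a term to the loss, it can be dropped into an existing VAE training loop without new model components, consistent with the abstract's claim; the weight `lam` would need to be tuned per task.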
