Neighbor Embedding Variational Autoencoder (2103.11349v1)

Published 21 Mar 2021 in cs.LG and stat.ML

Abstract: As one of the most popular generative frameworks, variational autoencoders (VAE) are known to suffer from a phenomenon termed posterior collapse, i.e. the latent variational distributions collapse to the prior, especially when a strong decoder network is used. In this work, we analyze the latent representation of collapsed VAEs and propose a novel model, the neighbor embedding VAE (NE-VAE), which explicitly constrains the encoder to map inputs that are close in the input space to nearby points in the latent space. We observe that VAE variants reporting similar ELBO, KL divergence, or even mutual information scores may still organize the latent space quite differently. In our experiments, NE-VAE produces qualitatively different latent representations in which the majority of latent dimensions remain active, which may benefit downstream latent space optimization tasks. NE-VAE prevents posterior collapse to a much greater extent than its predecessors, and it can easily be plugged into any autoencoder framework without introducing additional model components or complex training routines.
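The abstract does not spell out the exact form of the neighbor-embedding constraint, so the following is only a minimal sketch of one plausible reading: the standard ELBO terms plus a penalty that pulls together the latent means of inputs that are nearest neighbors within a batch. The function name `ne_vae_loss` and the hyperparameters `lam` (penalty weight) and `k` (number of neighbors) are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def ne_vae_loss(x, x_recon, mu, logvar, lam=1.0, k=5):
    """Hypothetical NE-VAE objective (assumed form, not the paper's exact
    loss): ELBO terms plus a neighbor-embedding penalty that encourages
    input-space nearest neighbors to have nearby latent means."""
    # Reconstruction term (Gaussian decoder -> MSE; swap for BCE if the
    # decoder is Bernoulli).
    recon = F.mse_loss(x_recon, x, reduction="sum")

    # KL divergence between q(z|x) = N(mu, diag(exp(logvar))) and N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

    # Neighbor-embedding penalty: for each input, find its k nearest
    # neighbors within the batch (in input space) and penalize the
    # squared distance between their latent means.
    flat = x.flatten(1)
    d_in = torch.cdist(flat, flat)               # pairwise input distances
    d_in.fill_diagonal_(float("inf"))            # exclude self-matches
    nn_idx = d_in.topk(k, largest=False).indices # (batch, k) neighbor ids
    d_lat = (mu.unsqueeze(1) - mu[nn_idx]).pow(2).sum(-1)  # (batch, k)
    ne_penalty = d_lat.mean()

    return recon + kl + lam * ne_penalty
```

Because the penalty touches only the encoder outputs, a term like this can be added to any autoencoder's training loss without new model components, which is consistent with the plug-in property the abstract claims.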

Authors (5)
  1. Renfei Tu (1 paper)
  2. Yang Liu (2253 papers)
  3. Yongzeng Xue (1 paper)
  4. Cheng Wang (386 papers)
  5. Maozu Guo (9 papers)
