A Surprisingly Effective Fix for Deep Latent Variable Modeling of Text (1909.00868v1)

Published 2 Sep 2019 in cs.LG, cs.CL, and stat.ML

Abstract: When trained effectively, the Variational Autoencoder (VAE) is both a powerful language model and an effective representation learning framework. In practice, however, VAEs are trained with the evidence lower bound (ELBO) as a surrogate objective to the intractable marginal data likelihood. This approach to training yields unstable results, frequently leading to a disastrous local optimum known as posterior collapse. In this paper, we investigate a simple fix for posterior collapse which yields surprisingly effective results. The combination of two known heuristics, previously considered only in isolation, substantially improves held-out likelihood, reconstruction, and latent representation learning when compared with previous state-of-the-art methods. More interestingly, while our experiments demonstrate superiority on these principal evaluations, our method obtains a worse ELBO. We use these results to argue that the typical surrogate objective for VAEs may not be sufficient or necessarily appropriate for balancing the goals of representation learning and data distribution modeling.
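
For context (this background sketch is standard VAE material, not taken from the paper itself): the ELBO mentioned in the abstract is the usual variational lower bound on the marginal log-likelihood, written here for an observation x, latent code z, approximate posterior q_phi(z|x), decoder p_theta(x|z), and prior p(z):

\[
\log p_\theta(x) \;\ge\; \mathrm{ELBO}(x)
\;=\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
\;-\; \mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big).
\]

Posterior collapse is the degenerate optimum in which the KL term is driven to (near) zero, so q_phi(z|x) matches the prior p(z) for every x and the decoder effectively ignores the latent code; the paper's finding that a better-performing model can have a worse ELBO should be read against this decomposition.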

Authors (5)
  1. Bohan Li (88 papers)
  2. Junxian He (67 papers)
  3. Graham Neubig (342 papers)
  4. Taylor Berg-Kirkpatrick (106 papers)
  5. Yiming Yang (152 papers)
Citations (68)
