A Batch Normalized Inference Network Keeps the KL Vanishing Away (2004.12585v2)

Published 27 Apr 2020 in cs.LG and cs.CL

Abstract: Variational Autoencoder (VAE) is widely used as a generative model to approximate a model's posterior on latent variables by combining amortized variational inference and deep neural networks. However, when paired with strong autoregressive decoders, VAE often converges to a degenerated local optimum known as "posterior collapse". Previous approaches consider the Kullback-Leibler divergence (KL) individually for each datapoint. We propose to let the KL follow a distribution across the whole dataset, and show that keeping the expectation of the KL's distribution positive is sufficient to prevent posterior collapse. We then propose Batch Normalized-VAE (BN-VAE), a simple but effective approach that sets a lower bound on this expectation by regularizing the distribution of the approximate posterior's parameters. Without introducing any new model component or modifying the objective, our approach avoids posterior collapse effectively and efficiently. We further show that the proposed BN-VAE can be extended to the conditional VAE (CVAE). Empirically, our approach surpasses strong autoregressive baselines on language modeling, text classification, and dialogue generation, and rivals more complex approaches while keeping almost the same training time as a vanilla VAE.
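The abstract's core idea, batch-normalizing the approximate posterior's mean parameters so the expected KL stays bounded away from zero, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the Gaussian posterior, the latent size, and the fixed scale `gamma` are illustrative assumptions (the paper treats the scale as the quantity controlling the KL lower bound).

```python
import numpy as np

def batch_norm_means(mu, gamma=0.5, eps=1e-8):
    # Normalize the posterior means across the batch dimension, then
    # rescale with a fixed gamma (hypothetical value here). After this,
    # each latent dimension of mu has batch mean ~0 and variance ~gamma^2.
    mu_hat = (mu - mu.mean(axis=0)) / np.sqrt(mu.var(axis=0) + eps)
    return gamma * mu_hat

def gaussian_kl(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ) per datapoint, summed over latent dims.
    return 0.5 * np.sum(mu**2 + np.exp(logvar) - logvar - 1.0, axis=1)

rng = np.random.default_rng(0)
mu = rng.normal(scale=0.01, size=(64, 16))   # near-collapsed posterior means
logvar = np.zeros((64, 16))                  # unit variances, so KL ~ mu^2 / 2

mu_bn = batch_norm_means(mu, gamma=0.5)
# With unit variances, the batch-average KL after BN is approximately
# 0.5 * latent_dim * gamma^2 = 0.5 * 16 * 0.25 = 2.0, regardless of how
# small the original means were - the expected KL cannot collapse to zero.
print(gaussian_kl(mu, logvar).mean(), gaussian_kl(mu_bn, logvar).mean())
```

The point of the sketch: without BN the average KL shrinks with the means (posterior collapse), while after BN it is pinned near a floor determined only by `gamma` and the latent dimensionality.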

Authors (7)
  1. Qile Zhu (8 papers)
  2. Jianlin Su (31 papers)
  3. Wei Bi (62 papers)
  4. Xiaojiang Liu (27 papers)
  5. Xiyao Ma (6 papers)
  6. Xiaolin Li (54 papers)
  7. Dapeng Wu (52 papers)
Citations (55)
