On the Encoder-Decoder Incompatibility in Variational Text Modeling and Beyond (2004.09189v1)

Published 20 Apr 2020 in cs.CL, cs.LG, and stat.ML

Abstract: Variational autoencoders (VAEs) combine latent variables with amortized variational inference, whose optimization usually converges into a trivial local optimum termed posterior collapse, especially in text modeling. By tracking the optimization dynamics, we observe the encoder-decoder incompatibility that leads to poor parameterizations of the data manifold. We argue that the trivial local optimum may be avoided by improving the encoder and decoder parameterizations since the posterior network is part of a transition map between them. To this end, we propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder with the same structure and improves the encoder and decoder parameterizations via encoder weight sharing and decoder signal matching. We apply the proposed Coupled-VAE approach to various VAE models with different regularization, posterior family, decoder structure, and optimization strategy. Experiments on benchmark datasets (i.e., PTB, Yelp, and Yahoo) show consistently improved results in terms of probability estimation and richness of the latent space. We also generalize our method to conditional language modeling and propose Coupled-CVAE, which largely improves the diversity of dialogue generation on the Switchboard dataset.
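The abstract describes the coupling mechanism (encoder weight sharing plus decoder signal matching) only at a high level. Below is a minimal PyTorch sketch of how such a coupling could be wired up; the GRU encoder/decoder choice, all dimensions, and the MSE form of the signal-matching term are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoupledVAE(nn.Module):
    """Sketch of the Coupled-VAE idea: a VAE and a deterministic
    autoencoder share one encoder, and the VAE decoder's hidden
    signals are matched to the deterministic decoder's. Shapes and
    the MSE matching loss are assumptions for illustration."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, z_dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Shared encoder (encoder weight sharing).
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Stochastic posterior head for the VAE path.
        self.to_mu = nn.Linear(hid_dim, z_dim)
        self.to_logvar = nn.Linear(hid_dim, z_dim)
        # Deterministic head for the coupled autoencoder path.
        self.to_z_det = nn.Linear(hid_dim, z_dim)
        # Two decoders with the same structure, one per path.
        self.dec_vae = nn.GRU(emb_dim + z_dim, hid_dim, batch_first=True)
        self.dec_det = nn.GRU(emb_dim + z_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, x):
        e = self.emb(x)                       # (B, T, emb_dim)
        _, h = self.encoder(e)                # h: (1, B, hid_dim)
        h = h.squeeze(0)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        z_det = self.to_z_det(h)
        # Teacher-forced decoding; the latent is fed at every step.
        def dec_in(z_):
            return torch.cat(
                [e, z_.unsqueeze(1).expand(-1, e.size(1), -1)], dim=-1)
        s_vae, _ = self.dec_vae(dec_in(z))
        s_det, _ = self.dec_det(dec_in(z_det))
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        # Decoder signal matching: pull the VAE decoder's hidden states
        # toward the deterministic decoder's. Treating the deterministic
        # path as a fixed teacher (detach) is an assumption here.
        match = F.mse_loss(s_vae, s_det.detach())
        return self.out(s_vae), self.out(s_det), kl, match
```

A training step would then combine the reconstruction losses of both paths with the KL and matching terms, e.g. `loss = ce_vae + ce_det + beta * kl + lam * match`, where `beta` and `lam` are hypothetical weighting hyperparameters.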

Authors (3)
  1. Chen Wu
  2. Prince Zizhuang Wang
  3. William Yang Wang
Citations (4)
