
Topic-Guided Variational Autoencoders for Text Generation (1903.07137v1)

Published 17 Mar 2019 in cs.CL

Abstract: We propose a topic-guided variational autoencoder (TGVAE) model for text generation. Distinct from existing variational autoencoder (VAE) based approaches, which assume a simple Gaussian prior for the latent code, our model specifies the prior as a Gaussian mixture model (GMM) parametrized by a neural topic module. Each mixture component corresponds to a latent topic, which provides guidance for generating sentences under that topic. The neural topic module and the VAE-based neural sequence module in our model are learned jointly. In particular, a sequence of invertible Householder transformations is applied to endow the approximate posterior of the latent code with high flexibility during model inference. Experimental results show that our TGVAE outperforms alternative approaches on both unconditional and conditional text generation, producing semantically meaningful sentences across various topics.
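The two ingredients the abstract highlights, a topic-parametrized GMM prior and a sequence of invertible Householder transformations, can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the topic weights and component means below are hypothetical stand-ins for the outputs of the neural topic module, and the check at the end only demonstrates the key property of Householder maps (orthogonality, hence norm preservation) that makes the flow invertible.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 3, 4  # number of latent topics, latent code dimension (illustrative)

# Hypothetical GMM prior: one Gaussian component per latent topic.
# In TGVAE these parameters would come from the neural topic module.
weights = np.array([0.5, 0.3, 0.2])   # topic proportions
means = rng.normal(size=(K, D))       # per-topic component means
t = rng.choice(K, p=weights)          # draw a topic index
z0 = means[t] + rng.normal(size=D)    # draw a latent code from that component

def householder(v):
    """Householder matrix H = I - 2 v v^T / ||v||^2 (orthogonal, symmetric)."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

# Apply a short sequence of Householder transformations to the latent sample,
# mimicking how the paper adds flexibility to the approximate posterior.
z = z0.copy()
for _ in range(3):
    z = householder(rng.normal(size=D)) @ z

# Each H is orthogonal (|det H| = 1), so the composed flow preserves the
# norm of the latent code and is trivially invertible.
assert np.isclose(np.linalg.norm(z), np.linalg.norm(z0))
```

Because each Householder matrix has determinant -1, the log-determinant term in the variational bound vanishes, which is what makes this flow cheap to use during inference.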

Authors (8)
  1. Wenlin Wang (27 papers)
  2. Zhe Gan (135 papers)
  3. Hongteng Xu (67 papers)
  4. Ruiyi Zhang (98 papers)
  5. Guoyin Wang (108 papers)
  6. Dinghan Shen (34 papers)
  7. Changyou Chen (108 papers)
  8. Lawrence Carin (203 papers)
Citations (124)