Adversarial Learning on the Latent Space for Diverse Dialog Generation (1911.03817v3)
Published 10 Nov 2019 in cs.CL
Abstract: Generating relevant responses in a dialog is challenging, and requires not only proper modeling of the conversational context but also the ability to generate fluent sentences during inference. In this paper, we propose a two-step framework based on generative adversarial nets for generating conditioned responses. Our model first learns a meaningful representation of sentences by autoencoding, and then learns to map an input query to the response representation, which is in turn decoded as a response sentence. Both quantitative and qualitative evaluations show that our model generates more fluent, relevant, and diverse responses than existing state-of-the-art methods.
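
Below is a minimal sketch (in PyTorch, as an assumption) of the two-step idea the abstract describes: (1) a sentence autoencoder learns latent codes, and (2) a generator is trained adversarially to map a query's latent code to a plausible response code, which the decoder then turns into a response sentence. All module names, layer sizes, and the single training step shown are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch of "adversarial learning on the latent space" for dialog:
# step 1 pretrains a sentence autoencoder; step 2 runs a GAN over latent codes.
import torch
import torch.nn as nn

VOCAB, EMB, HID, LATENT = 10000, 256, 512, 128  # illustrative sizes

class SentenceAutoencoder(nn.Module):
    """Step 1: autoencode sentences to learn a latent sentence representation."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.enc = nn.GRU(EMB, HID, batch_first=True)
        self.to_z = nn.Linear(HID, LATENT)      # sentence -> latent code
        self.from_z = nn.Linear(LATENT, HID)    # latent code -> decoder init state
        self.dec = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def encode(self, tokens):
        _, h = self.enc(self.emb(tokens))
        return self.to_z(h[-1])

    def decode_logits(self, z, tokens):
        h0 = self.from_z(z).unsqueeze(0)
        out, _ = self.dec(self.emb(tokens), h0)
        return self.out(out)

class Generator(nn.Module):
    """Step 2: map a query's latent code to a predicted response code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT, HID), nn.ReLU(), nn.Linear(HID, LATENT))
    def forward(self, z_query):
        return self.net(z_query)

class Discriminator(nn.Module):
    """Scores (query code, response code) pairs: real pairs vs. generated ones."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * LATENT, HID), nn.ReLU(), nn.Linear(HID, 1))
    def forward(self, z_query, z_response):
        return self.net(torch.cat([z_query, z_response], dim=-1))

def adversarial_step(ae, G, D, opt_g, opt_d, query_tokens, response_tokens):
    """One illustrative GAN update on latent codes (autoencoder assumed pretrained/frozen)."""
    bce = nn.BCEWithLogitsLoss()
    with torch.no_grad():                        # latent codes come from the frozen encoder
        z_q = ae.encode(query_tokens)
        z_r_real = ae.encode(response_tokens)

    # Discriminator: distinguish real (query, response) code pairs from generated ones.
    z_r_fake = G(z_q).detach()
    ones = torch.ones(z_q.size(0), 1)
    zeros = torch.zeros(z_q.size(0), 1)
    d_loss = bce(D(z_q, z_r_real), ones) + bce(D(z_q, z_r_fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator with its predicted response codes.
    g_loss = bce(D(z_q, G(z_q)), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

At inference time, under the same assumptions, one would encode the query, run the generator to obtain a response code, and greedily or beam-decode from `decode_logits` to produce the response sentence.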
- Kashif Khan (3 papers)
- Gaurav Sahu (19 papers)
- Vikash Balasubramanian (2 papers)
- Lili Mou (79 papers)
- Olga Vechtomova (26 papers)