
Variational Cross-domain Natural Language Generation for Spoken Dialogue Systems (1812.08879v1)

Published 20 Dec 2018 in cs.CL and cs.AI

Abstract: Cross-domain natural language generation (NLG) remains a difficult task in spoken dialogue modelling. Given a semantic representation provided by the dialogue manager, the language generator should produce sentences that convey the desired information. Traditional template-based generators can produce sentences containing all necessary information, but these sentences lack diversity. With RNN-based models, the diversity of the generated sentences can be high; however, some information is often lost in the process. In this work, we improve an RNN-based generator by incorporating sentence-level latent information during generation, using the conditional variational autoencoder (CVAE) architecture. We demonstrate that our model outperforms the original RNN-based generator while yielding highly diverse sentences. In addition, our model performs better when the training data is limited.
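The core mechanism the abstract refers to is the CVAE's latent variable: a sentence-level code z sampled via the reparameterization trick and regularized by a KL term in the ELBO. The sketch below illustrates these two pieces with numpy; all shapes, names, and dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps, so gradients can flow through mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ) per example, for a diagonal Gaussian."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Suppose a recognition network has encoded a batch of 4 sentences (together
# with their dialogue-act condition) into a 16-dimensional latent posterior.
mu = rng.standard_normal((4, 16)) * 0.1
log_var = rng.standard_normal((4, 16)) * 0.1

z = reparameterize(mu, log_var, rng)     # latent code fed to the RNN decoder
kl = kl_to_standard_normal(mu, log_var)  # KL regularizer in the ELBO

print(z.shape, kl.shape)  # (4, 16) (4,)
```

At training time, z would condition the RNN decoder alongside the semantic representation; the ELBO combines the decoder's reconstruction loss with this KL term.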

Authors (7)
  1. Bo-Hsiang Tseng (20 papers)
  2. Florian Kreyssig (4 papers)
  3. Yen-Chen Wu (7 papers)
  4. Stefan Ultes (32 papers)
  5. Pawel Budzianowski (4 papers)
  6. Inigo Casanueva (3 papers)
  7. Milica Gasic (18 papers)
Citations (14)