Variational Cross-domain Natural Language Generation for Spoken Dialogue Systems (1812.08879v1)
Abstract: Cross-domain natural language generation (NLG) is still a difficult task within spoken dialogue modelling. Given a semantic representation provided by the dialogue manager, the language generator should produce sentences that convey the desired information. Traditional template-based generators can produce sentences containing all of the necessary information, but these sentences are not sufficiently diverse. With RNN-based models, the diversity of the generated sentences can be high; however, some information tends to be lost in the process. In this work, we improve an RNN-based generator by incorporating sentence-level latent information during generation, using the conditional variational autoencoder (CVAE) architecture. We demonstrate that our model outperforms the original RNN-based generator while yielding highly diverse sentences. In addition, our model performs better when the training data is limited.
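To make the idea concrete, below is a minimal sketch of the general CVAE-for-NLG setup the abstract describes: a sentence-level latent variable z is inferred from the target sentence together with the dialogue-act (DA) condition supplied by the dialogue manager, and an RNN decoder generates tokens conditioned on both z and the DA vector. This is not the authors' exact architecture; all class names, layer choices, dimensions, and the `kl_weight` parameter are illustrative assumptions.

```python
# Minimal CVAE-style conditional sentence generator (illustrative sketch,
# not the paper's exact model). Names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CVAEGenerator(nn.Module):
    def __init__(self, vocab_size, da_size, emb_dim=64, hid_dim=128, z_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Recognition network: encodes the target sentence together with the DA.
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.to_mu = nn.Linear(hid_dim + da_size, z_dim)
        self.to_logvar = nn.Linear(hid_dim + da_size, z_dim)
        # Decoder: an RNN conditioned on [z; DA] via its initial hidden state.
        self.to_h0 = nn.Linear(z_dim + da_size, hid_dim)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens, da_vec):
        # tokens: (batch, seq_len) word ids; da_vec: (batch, da_size) DA features.
        emb = self.embed(tokens)
        _, h_enc = self.encoder(emb)                        # (1, batch, hid_dim)
        enc = torch.cat([h_enc.squeeze(0), da_vec], dim=-1)
        mu, logvar = self.to_mu(enc), self.to_logvar(enc)
        # Reparameterisation trick: z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        h0 = torch.tanh(self.to_h0(torch.cat([z, da_vec], dim=-1))).unsqueeze(0)
        dec_out, _ = self.decoder(emb, h0)                  # teacher forcing
        logits = self.out(dec_out)                          # (batch, seq, vocab)
        return logits, mu, logvar


def cvae_loss(logits, targets, mu, logvar, kl_weight=1.0):
    # Reconstruction term (token-level cross-entropy) plus the KL divergence
    # between the approximate posterior and a standard normal prior.
    rec = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
    return rec + kl_weight * kl


if __name__ == "__main__":
    model = CVAEGenerator(vocab_size=1000, da_size=20)
    tokens = torch.randint(0, 1000, (4, 12))   # dummy token batch
    da_vec = torch.rand(4, 20)                 # dummy dialogue-act features
    logits, mu, logvar = model(tokens, da_vec)
    print(cvae_loss(logits, tokens, mu, logvar, kl_weight=0.5))
```

In practice, text VAEs of this kind are commonly trained with a KL weight that is annealed from 0 towards 1 to mitigate posterior collapse; the specific schedule, the way the DA is encoded, and how z is injected into the decoder are design choices that vary between implementations and may differ from the paper's.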
- Bo-Hsiang Tseng
- Florian Kreyssig
- Yen-Chen Wu
- Stefan Ultes
- Pawel Budzianowski
- Inigo Casanueva
- Milica Gasic