Natural Language Generation in Dialogue using Lexicalized and Delexicalized Data (1606.03632v3)

Published 11 Jun 2016 in cs.CL

Abstract: Natural language generation plays a critical role in spoken dialogue systems. We present a new approach to natural language generation for task-oriented dialogue using recurrent neural networks in an encoder-decoder framework. In contrast to previous work, our model uses both lexicalized and delexicalized components, i.e., slot-value pairs for dialogue acts, with slots and corresponding values aligned together. This allows our model to learn from all available data, including the slot-value pairing, rather than being restricted to delexicalized slots. We show that this helps our model generate more natural sentences with better grammar. We further improve our model's performance by transferring weights learnt from a pretrained sentence auto-encoder. Human evaluation of our best-performing model indicates that it generates sentences which users find more appealing.
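The delexicalization the abstract refers to can be illustrated with a minimal sketch: slot values from a dialogue act are replaced by slot placeholders, producing the delexicalized form alongside the original lexicalized sentence. This is an illustrative assumption about the preprocessing step, not the authors' code; the function name and example data are hypothetical.

```python
def delexicalize(sentence: str, slot_values: dict) -> str:
    """Replace each slot value in `sentence` with a <slot> placeholder."""
    delex = sentence
    for slot, value in slot_values.items():
        delex = delex.replace(value, f"<{slot}>")
    return delex

# Hypothetical dialogue act: inform(name="Sushi House", food="Japanese")
act = {"name": "Sushi House", "food": "Japanese"}
lexicalized = "Sushi House serves Japanese food."
delexicalized = delexicalize(lexicalized, act)
# delexicalized == "<name> serves <food> food."
```

Training on both forms, with slots aligned to their values, is what lets the model exploit the full lexicalized data rather than only the placeholder templates.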

Authors (5)
  1. Shikhar Sharma (15 papers)
  2. Jing He (65 papers)
  3. Kaheer Suleman (19 papers)
  4. Hannes Schulz (15 papers)
  5. Philip Bachman (25 papers)
Citations (29)