Skeleton-to-Response: Dialogue Generation Guided by Retrieval Memory (1809.05296v5)

Published 14 Sep 2018 in cs.CL

Abstract: For dialogue response generation, traditional generative models generate responses solely from input queries. Such models rely on insufficient information for generating a specific response, since a given query can be answered in multiple ways. Consequently, those models tend to output generic and dull responses, impeding the generation of informative utterances. Recently, researchers have attempted to fill the information gap by exploiting information retrieval techniques. When generating a response for a current query, similar dialogues retrieved from the entire training data are considered as an additional knowledge source. While this can harvest a large amount of information, the generative models could be overwhelmed, leading to undesirable performance. In this paper, we propose a new framework that exploits retrieval results via a skeleton-then-response paradigm. First, a skeleton is generated by revising the retrieved responses. Then, a novel generative model uses both the generated skeleton and the original query for response generation. Experimental results show that our approaches significantly improve the diversity and informativeness of the generated responses.
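
Below is a minimal runnable sketch of the skeleton-then-response paradigm the abstract describes. The function names and word-overlap heuristics are illustrative assumptions only: in the paper, retrieval runs over the full training data, and the skeleton generator and skeleton-guided response generator are learned neural models rather than the toy rules used here.

```python
# Toy sketch of the skeleton-then-response paradigm (illustrative, not the
# paper's models): retrieve a similar dialogue, revise its response into a
# skeleton, then produce a response conditioned on the skeleton and the query.

from typing import List, Tuple

Memory = List[Tuple[str, str]]  # (retrieved query, retrieved response) pairs


def retrieve(query: str, memory: Memory) -> Tuple[str, str]:
    """Return the memory pair whose query shares the most words with the input."""
    q_words = set(query.lower().split())
    return max(memory, key=lambda pair: len(q_words & set(pair[0].lower().split())))


def make_skeleton(query: str, retrieved_query: str, retrieved_response: str) -> List[str]:
    """Revise the retrieved response into a skeleton by blanking words that are
    specific to the retrieved query and not shared with the current query."""
    current = set(query.lower().split())
    specific = set(retrieved_query.lower().split()) - current
    return ["<blank>" if w.lower() in specific else w
            for w in retrieved_response.split()]


def generate_response(query: str, skeleton: List[str]) -> str:
    """Stand-in for the generative model: fill blanks with content words
    from the current query (a small hard-coded stoplist drops function words)."""
    fillers = [w for w in query.split()
               if w.lower() not in {"do", "you", "like", "what", "is"}]
    out = []
    for token in skeleton:
        out.append(fillers.pop(0) if token == "<blank>" and fillers else token)
    return " ".join(t for t in out if t != "<blank>")


if __name__ == "__main__":
    memory: Memory = [
        ("do you like jazz", "yes , jazz is my favorite kind of music"),
        ("where are you from", "i grew up in a small town near the coast"),
    ]
    query = "do you like hiking"
    r_query, r_response = retrieve(query, memory)
    skeleton = make_skeleton(query, r_query, r_response)
    print("skeleton :", " ".join(skeleton))   # yes , <blank> is my favorite kind of music
    print("response :", generate_response(query, skeleton))
```

Running the sketch on the toy memory yields the skeleton "yes , <blank> is my favorite kind of music" and the filled response "yes , hiking is my favorite kind of music", illustrating how the retrieved response supplies structure while the current query supplies the specifics.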

Authors (7)
  1. Deng Cai (181 papers)
  2. Yan Wang (733 papers)
  3. Victoria Bi (2 papers)
  4. Zhaopeng Tu (135 papers)
  5. Xiaojiang Liu (27 papers)
  6. Wai Lam (117 papers)
  7. Shuming Shi (126 papers)
Citations (88)