
Reflect, Not Reflex: Inference-Based Common Ground Improves Dialogue Response Quality (2211.09267v1)

Published 16 Nov 2022 in cs.CL, cs.AI, and cs.LG

Abstract: Human communication relies on common ground (CG), the mutual knowledge and beliefs shared by participants, to produce coherent and interesting conversations. In this paper, we demonstrate that current response generation (RG) models produce generic and dull responses in dialogues because they act reflexively, failing to explicitly model CG, both due to the lack of CG in training data and the standard RG training procedure. We introduce Reflect, a dataset that annotates dialogues with explicit CG (materialized as inferences approximating shared knowledge and beliefs) and solicits 9k diverse human-generated responses each following one common ground. Using Reflect, we showcase the limitations of current dialogue data and RG models: less than half of the responses in current data are rated as high quality (sensible, specific, and interesting) and models trained using this data have even lower quality, while most Reflect responses are judged high quality. Next, we analyze whether CG can help models produce better-quality responses by using Reflect CG to guide RG models. Surprisingly, we find that simply prompting GPT3 to "think" about CG generates 30% more quality responses, showing promising benefits to integrating CG into the RG process.
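The paper's key finding is that a two-step prompt, first eliciting a common-ground inference and then conditioning the response on it, improves response quality. The sketch below illustrates that two-step structure in a prompt-building helper; the wording and the function name are illustrative assumptions, not the paper's actual prompt template.

```python
# Hypothetical sketch of "reflect, then respond" prompting: step 1 asks
# the model for a common-ground (CG) inference about the dialogue; step 2
# conditions the response on that inference. Prompt text is illustrative.

def build_reflect_prompts(dialogue_history: list[str]) -> tuple[str, str]:
    """Return (inference_prompt, response_template) for two-step prompting.

    The response_template has an {inference} slot to be filled with the
    model's answer to the inference prompt.
    """
    history = "\n".join(dialogue_history)
    inference_prompt = (
        f"Dialogue so far:\n{history}\n\n"
        "What do both speakers likely know or believe at this point? "
        "State one shared inference."
    )
    response_template = (
        f"Dialogue so far:\n{history}\n\n"
        "Shared inference: {inference}\n"
        "Write a specific, interesting next response that follows "
        "this inference."
    )
    return inference_prompt, response_template


# Usage: build the two prompts, send the first to an LLM, then fill the
# template with the returned inference before requesting the response.
infer_prompt, resp_template = build_reflect_prompts(
    ["A: I just got back from hiking Mt. Baldy.", "B: Wow, in this heat?"]
)
```

The point of the two-step structure is that the response call never sees a bare dialogue history; it always sees an explicit inference, which the paper reports yields roughly 30% more high-quality responses than reflexive single-step generation.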

Authors (7)
  1. Pei Zhou (30 papers)
  2. Hyundong Cho (12 papers)
  3. Pegah Jandaghi (9 papers)
  4. Dong-Ho Lee (30 papers)
  5. Bill Yuchen Lin (72 papers)
  6. Jay Pujara (44 papers)
  7. Xiang Ren (194 papers)
Citations (25)