Multi-Sentence Knowledge Selection in Open-Domain Dialogue (2203.00763v2)

Published 1 Mar 2022 in cs.CL

Abstract: Incorporating external knowledge sources effectively in conversations is a longstanding problem in open-domain dialogue research. The existing literature on open-domain knowledge selection is limited and makes certain brittle assumptions on knowledge sources to simplify the overall task (Dinan et al., 2019), such as the existence of a single relevant knowledge sentence per context. In this work, we evaluate the existing state of open-domain conversation knowledge selection, showing where the existing methodologies regarding data and evaluation are flawed. We then improve on them by proposing a new framework for collecting relevant knowledge, and create an augmented dataset based on the Wizard of Wikipedia (WOW) corpus, which we call WOW++. WOW++ averages 8 relevant knowledge sentences per dialogue context, embracing the inherent ambiguity of open-domain dialogue knowledge selection. We then benchmark various knowledge ranking algorithms on this augmented dataset with both intrinsic evaluation and extrinsic measures of response quality, showing that neural rerankers that use WOW++ can outperform rankers trained on standard datasets.
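The abstract's core task, ranking candidate knowledge sentences against a dialogue context, can be illustrated with a minimal lexical baseline. This sketch is a hypothetical illustration only (a bag-of-words cosine-similarity ranker), not the neural rerankers benchmarked in the paper; the function name `rank_knowledge` and the example sentences are invented for demonstration.

```python
import math
import re
from collections import Counter

def rank_knowledge(context, candidates, top_k=3):
    """Rank candidate knowledge sentences by cosine similarity of
    bag-of-words vectors to the dialogue context.
    Illustrative lexical baseline, not the paper's neural reranker."""
    def vec(text):
        # Lowercase and tokenize on word characters, then count terms.
        return Counter(re.findall(r"[a-z0-9']+", text.lower()))

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a if t in b)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    ctx = vec(context)
    # Sort candidates by similarity to the context, highest first.
    ranked = sorted(candidates, key=lambda s: cosine(ctx, vec(s)), reverse=True)
    return ranked[:top_k]

context = "I love hiking in the mountains"
candidates = [
    "Paris is the capital of France.",
    "Hiking is an outdoor activity in the mountains.",
    "Mountains form through tectonic forces.",
]
print(rank_knowledge(context, candidates, top_k=2))
```

Returning the top-k sentences rather than a single best match reflects the paper's observation that a dialogue context typically has several relevant knowledge sentences (8 on average in WOW++), not just one.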

Authors (7)
  1. Mihail Eric
  2. Nicole Chartier
  3. Behnam Hedayatnia
  4. Karthik Gopalakrishnan
  5. Pankaj Rajan
  6. Yang Liu
  7. Dilek Hakkani-Tur
Citations (13)