
From Rewriting to Remembering: Common Ground for Conversational QA Models (2204.03930v1)

Published 8 Apr 2022 in cs.CL

Abstract: In conversational QA, models have to leverage information in previous turns to answer upcoming questions. Current approaches, such as Question Rewriting, struggle to extract relevant information as the conversation unwinds. We introduce the Common Ground (CG), an approach to accumulate conversational information as it emerges and select the relevant information at every turn. We show that CG offers a more efficient and human-like way to exploit conversational information compared to existing approaches, leading to improvements on Open Domain Conversational QA.
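The abstract describes the Common Ground as a store that accumulates conversational information as it emerges and selects the relevant pieces at every turn. A minimal sketch of that accumulate-then-select loop is below; the `CommonGround` class and its word-overlap selection are illustrative assumptions, not the paper's learned selection model.

```python
class CommonGround:
    """Toy Common Ground store: accumulate statements as the
    conversation unfolds, then select the ones relevant to the
    current question. Selection here is naive word overlap; the
    paper's approach uses a learned model."""

    def __init__(self):
        self.statements = []  # accumulated conversational information

    def update(self, statement):
        # Accumulate information as it emerges, turn by turn.
        self.statements.append(statement)

    def select(self, question, k=2):
        # Score stored statements by word overlap with the question
        # and return the top-k as this turn's relevant context.
        q = set(question.lower().split())
        scored = sorted(
            self.statements,
            key=lambda s: len(q & set(s.lower().split())),
            reverse=True,
        )
        return scored[:k]


cg = CommonGround()
cg.update("Marie Curie won the Nobel Prize in Physics in 1903.")
cg.update("She was born in Warsaw.")
cg.update("The weather in Paris is mild.")
print(cg.select("When did Curie win the Nobel Prize", k=1))
```

Unlike Question Rewriting, which must repeatedly re-encode the dialogue history into a standalone question, this pattern keeps a growing store and filters it per turn.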

Authors (5)
  1. Xiaoyu Shen (73 papers)
  2. Gianni Barlacchi (10 papers)
  3. Bill Byrne (57 papers)
  4. Adrià de Gispert (16 papers)
  5. Marco del Tredici (13 papers)
Citations (10)