Open-Domain Conversational Question Answering with Historical Answers (2211.09401v1)

Published 17 Nov 2022 in cs.CL

Abstract: Open-domain conversational question answering can be viewed as two tasks: passage retrieval and conversational question answering, where the former selects candidate passages from a large corpus and the latter requires a better understanding of a question in context to predict the answer. This paper proposes ConvADR-QA, which leverages historical answers to boost retrieval performance and thereby achieves better answering performance. In the proposed framework, the retrievers use a teacher-student framework to reduce noise from previous turns. Experiments on the benchmark dataset OR-QuAC demonstrate that the model outperforms existing baselines in both extractive and generative reader settings, justifying the effectiveness of historical answers for open-domain conversational question answering.
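The abstract describes the approach only at a high level, so the snippet below is a minimal, hypothetical sketch of the core idea of folding historical answers into the query of a dense retriever. The `encode` stub, the `[SEP]`-joined query format, and the toy passages are assumptions made for illustration; they are not the paper's actual ConvADR-QA encoder, query format, or teacher-student training procedure.

```python
import numpy as np

def encode(text: str) -> np.ndarray:
    """Hypothetical dense-encoder stub. In practice this would be a trained
    query/passage encoder (e.g. a BERT-based dual encoder), not random vectors."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.standard_normal(128)
    return vec / np.linalg.norm(vec)

def build_query(current_question: str, history: list[tuple[str, str]]) -> str:
    """Concatenate previous questions and their answers with the current
    question, so the retriever sees historical answers as extra context."""
    parts = []
    for question, answer in history:
        parts.extend([question, answer])
    parts.append(current_question)
    return " [SEP] ".join(parts)

def retrieve(current_question, history, passage_embeddings, passages, k=3):
    """Rank passages by inner product with the history-augmented query vector."""
    query_vec = encode(build_query(current_question, history))
    scores = passage_embeddings @ query_vec
    top = np.argsort(-scores)[:k]
    return [(passages[i], float(scores[i])) for i in top]

# Toy usage: two prior turns plus the current question.
passages = ["Passage about topic A.", "Passage about topic B.", "Passage about topic C."]
passage_embeddings = np.stack([encode(p) for p in passages])
history = [("Who wrote the paper?", "Fang et al."),
           ("When was it published?", "November 2022.")]
print(retrieve("What dataset did they use?", history, passage_embeddings, passages))
```

In the paper's setting, the gain comes from conditioning retrieval on answers from earlier turns rather than on the current question alone; the teacher-student training mentioned in the abstract is intended to keep noisy history from degrading the retriever, a step this sketch does not model.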

Authors (4)
  1. Hung-Chieh Fang (4 papers)
  2. Kuo-Han Hung (4 papers)
  3. Chao-Wei Huang (28 papers)
  4. Yun-Nung Chen (104 papers)
Citations (7)