Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering (1711.05116v2)

Published 14 Nov 2017 in cs.CL and cs.AI

Abstract: A popular recent approach to answering open-domain questions is to first search for question-related passages and then apply reading comprehension models to extract answers. Existing methods usually extract answers from single passages independently, but some questions require combining evidence from different sources to answer correctly. In this paper, we propose two models that make use of multiple passages to generate their answers. Both use an answer re-ranking approach that reorders the answer candidates produced by an existing state-of-the-art QA model. We propose two methods, namely strength-based re-ranking and coverage-based re-ranking, to use the aggregated evidence from different passages to better determine the answer. Our models achieve state-of-the-art results on three public open-domain QA datasets: Quasar-T, SearchQA, and the open-domain version of TriviaQA, with about 8 percentage points of improvement on the first two datasets.
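The strength-based re-ranking idea described in the abstract can be illustrated with a minimal sketch: candidates extracted independently from many passages are aggregated, and an answer supported by more passages is ranked higher. This is an illustrative simplification, not the paper's implementation; the `extractions` data and function names are hypothetical.

```python
from collections import Counter

def strength_rerank(extractions):
    """Sketch of count-based (strength-based) re-ranking.

    `extractions` is a list of (answer_text, passage_id) pairs produced
    by a base QA model run on each retrieved passage independently.
    Candidates are normalized (lowercased) and ranked by how many
    passages support them.
    """
    counts = Counter(answer.lower() for answer, _ in extractions)
    return sorted(counts, key=counts.get, reverse=True)

# Hypothetical per-passage extractions for one question.
extractions = [("Paris", 1), ("paris", 2), ("Lyon", 3), ("Paris", 4)]
print(strength_rerank(extractions))  # 'paris' outranks 'lyon'
```

A coverage-based re-ranker would instead score each candidate by how well the union of its supporting passages covers the question's content words, which helps when evidence is spread across sources rather than repeated.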

Authors (10)
  1. Shuohang Wang (69 papers)
  2. Mo Yu (117 papers)
  3. Jing Jiang (192 papers)
  4. Wei Zhang (1489 papers)
  5. Xiaoxiao Guo (38 papers)
  6. Shiyu Chang (120 papers)
  7. Zhiguo Wang (100 papers)
  8. Tim Klinger (23 papers)
  9. Gerald Tesauro (29 papers)
  10. Murray Campbell (27 papers)
Citations (158)