R$^3$: Reinforced Ranker-Reader for Open-Domain Question Answering (1709.00023v2)

Published 31 Aug 2017 in cs.CL and cs.AI

Abstract: In recent years, researchers have achieved considerable success applying neural network methods to question answering (QA). These approaches have achieved state-of-the-art results in simplified closed-domain settings such as the SQuAD dataset (Rajpurkar et al., 2016), which provides a pre-selected passage from which the answer to a given question may be extracted. More recently, researchers have begun to tackle open-domain QA, in which the model is given a question and access to a large corpus (e.g., Wikipedia) instead of a pre-selected passage (Chen et al., 2017a). This setting is more complex, as it requires large-scale search for relevant passages by an information retrieval component, combined with a reading comprehension model that "reads" the passages to generate an answer to the question. Performance in this setting lags considerably behind closed-domain performance. In this paper, we present a novel open-domain QA system called Reinforced Ranker-Reader ($R^3$), based on two algorithmic innovations. First, we propose a new pipeline for open-domain QA with a Ranker component, which learns to rank retrieved passages in terms of the likelihood of generating the ground-truth answer to a given question. Second, we propose a novel method that jointly trains the Ranker along with an answer-generation Reader model using reinforcement learning. We report extensive experimental results showing that our method significantly improves on the state of the art for multiple open-domain QA datasets.
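
The joint training idea in the abstract can be illustrated with a minimal sketch: the Ranker assigns scores to retrieved passages, a passage is sampled from the resulting distribution, the Reader's answer quality on that passage serves as the reward, and the Ranker's scores are updated with a REINFORCE-style policy gradient. This is not the paper's actual model (which uses neural Ranker and Reader networks); all names, shapes, and the toy reward are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax over passage scores
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def reinforce_step(ranker_scores, reward_fn, lr=0.1):
    """One REINFORCE update on raw per-passage scores (logits).

    reward_fn stands in for the Reader: given the index of the sampled
    passage, it returns how good the extracted answer was (e.g. F1).
    """
    probs = softmax(ranker_scores)
    idx = rng.choice(len(probs), p=probs)   # sample a passage from the Ranker
    reward = reward_fn(idx)                 # Reader's answer quality as reward
    # gradient of log pi(idx) w.r.t. the logits: one_hot(idx) - probs
    grad = -probs
    grad[idx] += 1.0
    return ranker_scores + lr * reward * grad, idx, reward

# Toy setup: of 4 retrieved passages, only passage 2 yields the
# ground-truth answer (reward 1), so the Ranker should learn to rank it first.
scores = np.zeros(4)
for _ in range(200):
    scores, _, _ = reinforce_step(scores, lambda i: 1.0 if i == 2 else 0.0)

print(int(np.argmax(scores)))  # the Ranker learns to prefer passage 2
```

In the full system, the gradient would flow into the Ranker network's parameters rather than raw logits, and the reward would come from comparing the Reader's extracted span against the ground-truth answer.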

Authors (10)
  1. Shuohang Wang (69 papers)
  2. Mo Yu (117 papers)
  3. Xiaoxiao Guo (38 papers)
  4. Zhiguo Wang (100 papers)
  5. Tim Klinger (23 papers)
  6. Wei Zhang (1489 papers)
  7. Shiyu Chang (120 papers)
  8. Gerald Tesauro (29 papers)
  9. Bowen Zhou (141 papers)
  10. Jing Jiang (192 papers)
Citations (65)