R2-D2: A Modular Baseline for Open-Domain Question Answering (2109.03502v1)

Published 8 Sep 2021 in cs.CL, cs.IR, and cs.LG

Abstract: This work presents a novel four-stage open-domain QA pipeline, R2-D2 (Rank twice, reaD twice). The pipeline is composed of a retriever, a passage reranker, an extractive reader, a generative reader, and a mechanism that aggregates the final prediction from all of the system's components. We demonstrate its strength across three open-domain QA datasets: NaturalQuestions, TriviaQA, and EfficientQA, surpassing the state of the art on the first two. Our analysis demonstrates that: (i) combining the extractive and generative readers yields absolute improvements of up to 5 exact match points and is at least twice as effective as a posterior-averaging ensemble of the same models with different parameters; (ii) the extractive reader, with fewer parameters, can match the performance of the generative reader on extractive QA datasets.
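To make the four-stage flow concrete, here is a minimal sketch of a retrieve, rerank, read-twice, aggregate pipeline in the spirit of R2-D2. Every component below is a deliberately simple stand-in (word-overlap retrieval, trivial "readers"); the actual system uses a dense retriever, a neural passage reranker, and transformer-based extractive and generative readers, and its aggregation mechanism is more involved than the score summation shown here.

```python
# Hedged sketch of a four-stage open-domain QA pipeline:
# retrieve -> rerank -> extractive read + generative read -> aggregate.
# All component implementations are illustrative stand-ins, not the paper's models.

from collections import Counter
from typing import List, Tuple


def retrieve(question: str, corpus: List[str], k: int = 3) -> List[str]:
    """Stage 1: pick top-k passages by word overlap (stand-in for a dense retriever)."""
    q = Counter(question.lower().split())
    return sorted(corpus, key=lambda p: -sum(q[w] for w in p.lower().split()))[:k]


def rerank(question: str, passages: List[str]) -> List[str]:
    """Stage 2: reorder passages (stand-in for a cross-encoder passage reranker)."""
    q = set(question.lower().split())
    return sorted(passages, key=lambda p: -len(q & set(p.lower().split())))


def extractive_reader(question: str, passages: List[str]) -> List[Tuple[str, float]]:
    """Stage 3: score candidate answer spans from the passages (stand-in)."""
    return [(p.split()[0], 1.0 / (i + 1)) for i, p in enumerate(passages) if p]


def generative_reader(question: str, passages: List[str]) -> List[Tuple[str, float]]:
    """Stage 4: generate answer candidates conditioned on the passages (stand-in)."""
    return [(p.split()[-1], 0.5 / (i + 1)) for i, p in enumerate(passages) if p]


def aggregate(ext: List[Tuple[str, float]], gen: List[Tuple[str, float]]) -> str:
    """Fuse candidates from both readers; the paper reports that combining the two
    readers gives up to ~5 EM absolute improvement over using either one alone."""
    scores = Counter()
    for answer, score in ext + gen:
        scores[answer] += score
    return scores.most_common(1)[0][0]


if __name__ == "__main__":
    corpus = [
        "Paris is the capital of France.",
        "France is a country in Europe.",
        "The Eiffel Tower is in Paris.",
    ]
    question = "What is the capital of France?"
    passages = rerank(question, retrieve(question, corpus))
    print(aggregate(extractive_reader(question, passages),
                    generative_reader(question, passages)))
```

The key design point the sketch tries to convey is that the extractive and generative readers produce candidates independently and are only combined at the aggregation step, which is what allows the fused prediction to outperform either reader on its own.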

Authors (4)
  1. Martin Fajcik (16 papers)
  2. Martin Docekal (9 papers)
  3. Karel Ondrej (5 papers)
  4. Pavel Smrz (17 papers)
Citations (44)
