Synthetic Target Domain Supervision for Open Retrieval QA (2204.09248v1)

Published 20 Apr 2022 in cs.CL and cs.IR

Abstract: Neural passage retrieval is a new and promising approach in open retrieval question answering. In this work, we stress-test the Dense Passage Retriever (DPR) -- a state-of-the-art (SOTA) open domain neural retrieval model -- on closed and specialized target domains such as COVID-19, and find that it lags behind standard BM25 in this important real-world setting. To make DPR more robust under domain shift, we explore its fine-tuning with synthetic training examples, which we generate from unlabeled target domain text using a text-to-text generator. In our experiments, this noisy but fully automated target domain supervision gives DPR a sizable advantage over BM25 in out-of-domain settings, making it a more viable model in practice. Finally, an ensemble of BM25 and our improved DPR model yields the best results, further pushing the SOTA for open retrieval QA on multiple out-of-domain test sets.
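
The abstract describes generating noisy synthetic supervision from unlabeled target-domain text with a text-to-text generator and fine-tuning DPR on it. Below is a minimal sketch of that pipeline, assuming a HuggingFace T5-style generator; the checkpoint name, the "generate question:" prompt, and the random-negative sampling are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: synthetic DPR training triples from unlabeled target-domain text.
# Assumptions (not from the paper): a T5-style generator loaded via HuggingFace
# transformers; QG_MODEL is a hypothetical placeholder checkpoint name.
import random
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

QG_MODEL = "your-org/t5-question-generator"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(QG_MODEL)
generator = AutoModelForSeq2SeqLM.from_pretrained(QG_MODEL)

def synthesize_examples(passages, questions_per_passage=3):
    """Build (question, positive, negative) triples for DPR fine-tuning.

    Assumes len(passages) > 1 so a negative can always be sampled."""
    examples = []
    for i, passage in enumerate(passages):
        inputs = tokenizer("generate question: " + passage,
                           return_tensors="pt", truncation=True, max_length=512)
        outputs = generator.generate(**inputs, max_new_tokens=64,
                                     do_sample=True,
                                     num_return_sequences=questions_per_passage)
        questions = tokenizer.batch_decode(outputs, skip_special_tokens=True)
        for q in questions:
            # The source passage is the positive; a random other passage
            # serves as a (noisy) negative for contrastive training.
            negative = random.choice(passages[:i] + passages[i + 1:])
            examples.append({"question": q, "positive": passage,
                             "negative": negative})
    return examples
```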

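The abstract also reports that an ensemble of BM25 and the improved DPR model yields the best results, but does not say how the two are combined. One common scheme, sketched here purely as an assumption, is a weighted sum of min-max normalized sparse and dense scores (using the rank_bm25 package for BM25):

```python
# Minimal sketch of a BM25 + dense-retriever score ensemble.
# Assumption: simple weighted interpolation of min-max normalized scores;
# the paper's exact ensembling scheme is not given in the abstract.
import numpy as np
from rank_bm25 import BM25Okapi

def normalize(scores):
    """Min-max normalize so sparse and dense scores share a [0, 1] scale."""
    scores = np.asarray(scores, dtype=float)
    span = scores.max() - scores.min()
    return (scores - scores.min()) / span if span > 0 else np.zeros_like(scores)

def ensemble_rank(query_tokens, corpus_tokens, dense_scores, alpha=0.5):
    """Rank passages by alpha * BM25 + (1 - alpha) * dense score."""
    bm25 = BM25Okapi(corpus_tokens)
    sparse_scores = bm25.get_scores(query_tokens)
    combined = (alpha * normalize(sparse_scores)
                + (1 - alpha) * normalize(dense_scores))
    return np.argsort(combined)[::-1]  # best passage index first

# Usage: dense_scores would come from DPR (dot products of query and passage
# embeddings); the values below are stand-ins for illustration.
corpus = [["covid", "vaccine", "trial"], ["bm25", "ranking"], ["dense", "retrieval"]]
print(ensemble_rank(["covid", "vaccine"], corpus, dense_scores=[0.9, 0.1, 0.4]))
```
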
Authors (8)
  1. Revanth Gangi Reddy
  2. Bhavani Iyer
  3. Md Arafat Sultan
  4. Rong Zhang
  5. Avirup Sil
  6. Vittorio Castelli
  7. Radu Florian
  8. Salim Roukos
Citations (10)