Bridging the Training-Inference Gap for Dense Phrase Retrieval (2210.13678v1)

Published 25 Oct 2022 in cs.CL, cs.IR, and cs.LG

Abstract: Building dense retrievers requires a series of standard procedures, including training and validating neural models and creating indexes for efficient search. However, these procedures are often misaligned in that training objectives do not exactly reflect the retrieval scenario at inference time. In this paper, we explore how the gap between training and inference in dense retrieval can be reduced, focusing on dense phrase retrieval (Lee et al., 2021) where billions of representations are indexed at inference. Since validating every dense retriever with a large-scale index is practically infeasible, we propose an efficient way of validating dense retrievers using a small subset of the entire corpus. This allows us to validate various training strategies including unifying contrastive loss terms and using hard negatives for phrase retrieval, which largely reduces the training-inference discrepancy. As a result, we improve top-1 phrase retrieval accuracy by 2-3 points and top-20 passage retrieval accuracy by 2-4 points for open-domain question answering. Our work urges modeling dense retrievers with careful consideration of training and inference via efficient validation while advancing phrase retrieval as a general solution for dense retrieval.
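The unified contrastive objective with hard negatives can be pictured with a short sketch. The snippet below is a minimal illustration, not the authors' exact training code: it assumes precomputed question embeddings `q`, gold phrase embeddings `p_pos`, and mined hard-negative embeddings `p_hard` (all hypothetical names), and scores each question against its own positive, the other in-batch positives, and all hard negatives with a single cross-entropy loss.

```python
import torch
import torch.nn.functional as F

def unified_contrastive_loss(q, p_pos, p_hard, temperature=1.0):
    """Single cross-entropy over in-batch positives and hard negatives.

    q:      (B, d) question embeddings
    p_pos:  (B, d) gold phrase embeddings, where q[i] matches p_pos[i]
    p_hard: (B, d) mined hard-negative phrase embeddings
    """
    # Candidate pool: every in-batch positive plus every hard negative (2B phrases).
    candidates = torch.cat([p_pos, p_hard], dim=0)
    # Similarity of each question to every candidate: shape (B, 2B).
    scores = q @ candidates.t() / temperature
    # The gold phrase for question i sits at column i of the score matrix.
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(scores, targets)
```

In this framing, the paper's efficient-validation idea corresponds to evaluating checkpoints against an index built from a small sampled subset of the corpus rather than the full multi-billion-phrase index, which would be impractical to rebuild for every candidate model.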

Authors (7)
  1. Gyuwan Kim
  2. Jinhyuk Lee
  3. Barlas Oguz
  4. Wenhan Xiong
  5. Yizhe Zhang
  6. Yashar Mehdad
  7. William Yang Wang