
Embedding-based Zero-shot Retrieval through Query Generation (2009.10270v1)

Published 22 Sep 2020 in cs.IR

Abstract: Passage retrieval addresses the problem of locating relevant passages, usually from a large corpus, given a query. In practice, lexical term-matching algorithms like BM25 are popular choices for retrieval owing to their efficiency. However, term-based matching algorithms often miss relevant passages that have no lexical overlap with the query and cannot be fine-tuned to downstream datasets. In this work, we consider the embedding-based two-tower architecture as our neural retrieval model. Since labeled data can be scarce and because neural retrieval models require vast amounts of data to train, we propose a novel method for generating synthetic training data for retrieval. Our system produces remarkable results, significantly outperforming BM25 on 5 out of 6 datasets tested, by an average of 2.45 points for Recall@1. In some cases, our model trained on synthetic data can even outperform the same model trained on real data.
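The two-tower (dual-encoder) setup the abstract refers to can be sketched as follows. This is a minimal illustration, not the paper's implementation: `toy_encode` is a hypothetical stand-in encoder that hashes tokens into random vectors (a real system would use trained neural towers), and passages are ranked by dot-product similarity between query and passage embeddings.

```python
import numpy as np

def toy_encode(text, dim=64):
    # Stand-in encoder: sums a deterministic random vector per token,
    # then L2-normalizes. A real two-tower model would use a trained
    # neural encoder for each tower instead.
    vec = np.zeros(dim)
    for tok in text.lower().split():
        rng = np.random.default_rng(abs(hash(tok)) % (2**32))
        vec += rng.standard_normal(dim)
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def retrieve(query, passages, k=2):
    # Embed the query once, score every passage by dot product,
    # and return the top-k (passage, score) pairs.
    q = toy_encode(query)
    scores = [float(q @ toy_encode(p)) for p in passages]
    order = np.argsort(scores)[::-1][:k]
    return [(passages[i], scores[i]) for i in order]
```

A key design property of the two-tower architecture is that passage embeddings do not depend on the query, so they can be precomputed offline and indexed; at query time only the query tower runs, which is what makes embedding-based retrieval practical over large corpora.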

Authors (7)
  1. Davis Liang (15 papers)
  2. Peng Xu (357 papers)
  3. Siamak Shakeri (29 papers)
  4. Cicero Nogueira dos Santos (31 papers)
  5. Ramesh Nallapati (38 papers)
  6. Zhiheng Huang (33 papers)
  7. Bing Xiang (74 papers)
Citations (40)