
A Study on the Efficiency and Generalization of Light Hybrid Retrievers (2210.01371v2)

Published 4 Oct 2022 in cs.IR and cs.CL

Abstract: Hybrid retrievers can take advantage of both sparse and dense retrievers. Previous hybrid retrievers leverage indexing-heavy dense retrievers. In this work, we study the question: "Is it possible to reduce the indexing memory of hybrid retrievers without sacrificing performance?" Driven by this question, we leverage an indexing-efficient dense retriever (i.e., DrBoost) and introduce a LITE retriever that further reduces the memory of DrBoost. LITE is jointly trained with contrastive learning and knowledge distillation from DrBoost. We then integrate BM25, a sparse retriever, with either LITE or DrBoost to form light hybrid retrievers. Our Hybrid-LITE retriever uses 13X less indexing memory while maintaining 98.0% of the performance of the BM25-and-DPR hybrid retriever. In addition, we study the generalization capacity of our light hybrid retrievers on an out-of-domain dataset and a set of adversarial attack datasets. Experiments show that light hybrid retrievers achieve better generalization than individual sparse and dense retrievers. Nevertheless, our analysis shows that there is substantial room to improve the robustness of retrievers, suggesting a new research direction.
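
To make the hybrid idea concrete, below is a minimal sketch of how a sparse retriever (BM25) and a dense retriever can be fused at scoring time. The paper does not specify this exact fusion recipe; the min-max normalization, the interpolation weight `alpha`, and the random stand-in embeddings (in practice produced by an encoder such as DrBoost or LITE) are all assumptions made for illustration.

```python
# Minimal sketch of sparse + dense score fusion for a hybrid retriever.
# Assumptions (not from the paper): min-max normalization per query and a
# fixed interpolation weight `alpha`; the paper's fusion scheme may differ.
import numpy as np
from rank_bm25 import BM25Okapi

corpus = [
    "hybrid retrievers combine sparse and dense scores",
    "BM25 is a strong sparse baseline for retrieval",
    "dense retrievers embed queries and passages into vectors",
]
tokenized = [doc.split() for doc in corpus]
bm25 = BM25Okapi(tokenized)

query = "sparse and dense hybrid retrieval"
sparse_scores = np.array(bm25.get_scores(query.split()))

# Stand-in dense scores: in a real system these come from the dot product
# of query and passage embeddings produced by a trained encoder.
rng = np.random.default_rng(0)
doc_embs = rng.normal(size=(len(corpus), 64))
query_emb = rng.normal(size=64)
dense_scores = doc_embs @ query_emb

def minmax(x):
    """Scale scores to [0, 1] so sparse and dense are comparable."""
    span = x.max() - x.min()
    return (x - x.min()) / span if span > 0 else np.zeros_like(x)

alpha = 0.5  # hypothetical interpolation weight between the two retrievers
hybrid = alpha * minmax(sparse_scores) + (1 - alpha) * minmax(dense_scores)
for rank, idx in enumerate(np.argsort(-hybrid), 1):
    print(rank, round(float(hybrid[idx]), 3), corpus[idx])
```

The per-query normalization step matters because BM25 and embedding dot products live on different scales; without it, one retriever's scores would dominate the interpolation regardless of `alpha`.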

Authors (9)
  1. Man Luo (55 papers)
  2. Shashank Jain (7 papers)
  3. Anchit Gupta (21 papers)
  4. Arash Einolghozati (21 papers)
  5. Barlas Oguz (36 papers)
  6. Debojeet Chatterjee (5 papers)
  7. Xilun Chen (31 papers)
  8. Chitta Baral (152 papers)
  9. Peyman Heidari (3 papers)
Citations (5)