Efficient Document Ranking with Learnable Late Interactions (2406.17968v1)

Published 25 Jun 2024 in cs.IR, cs.AI, cs.LG, and stat.ML

Abstract: Cross-Encoder (CE) and Dual-Encoder (DE) models are two fundamental approaches for query-document relevance in information retrieval. To predict relevance, CE models use joint query-document embeddings, while DE models maintain factorized query and document embeddings; usually, the former has higher quality while the latter benefits from lower latency. Recently, late-interaction models have been proposed to realize more favorable latency-quality tradeoffs, by using a DE structure followed by a lightweight scorer based on query and document token embeddings. However, these lightweight scorers are often hand-crafted, and there is no understanding of their approximation power; further, such scorers require access to individual document token embeddings, which imposes an increased latency and storage burden. In this paper, we propose novel learnable late-interaction models (LITE) that resolve these issues. Theoretically, we prove that LITE is a universal approximator of continuous scoring functions, even for relatively small embedding dimension. Empirically, LITE outperforms previous late-interaction models such as ColBERT on both in-domain and zero-shot re-ranking tasks. For instance, experiments on MS MARCO passage re-ranking show that LITE not only yields a model with better generalization, but also lowers latency and requires 0.25x storage compared to ColBERT.
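To make the contrast in the abstract concrete, here is a minimal PyTorch-style sketch of the two scorer families it discusses: a ColBERT-style hand-crafted MaxSim late interaction, and a learnable token-level scorer. The abstract does not specify LITE's actual architecture, so the `LearnableScorer` class below is purely illustrative of the general idea of a learned aggregation over token similarities, not the paper's model.

```python
# Sketch of hand-crafted vs. learnable late-interaction scoring.
# maxsim_score mirrors ColBERT's standard scorer; LearnableScorer is a
# hypothetical learned head, since the abstract does not describe LITE's design.
import torch
import torch.nn as nn


def maxsim_score(q_emb: torch.Tensor, d_emb: torch.Tensor) -> torch.Tensor:
    """ColBERT-style MaxSim late interaction.

    q_emb: [num_query_tokens, dim] query token embeddings
    d_emb: [num_doc_tokens, dim]   document token embeddings
    Returns a scalar: for each query token, take the max similarity over
    document tokens, then sum over query tokens.
    """
    sim = q_emb @ d_emb.T                  # [num_query_tokens, num_doc_tokens]
    return sim.max(dim=1).values.sum()     # MaxSim, summed over query tokens


class LearnableScorer(nn.Module):
    """Hypothetical learnable late-interaction head (not the paper's LITE).

    Replaces the hand-crafted MaxSim with a small MLP applied to the
    flattened token-level similarity matrix.
    """

    def __init__(self, num_query_tokens: int, num_doc_tokens: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_query_tokens * num_doc_tokens, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q_emb: torch.Tensor, d_emb: torch.Tensor) -> torch.Tensor:
        sim = q_emb @ d_emb.T              # token-level similarity matrix
        return self.mlp(sim.flatten())     # learned aggregation into one score
```

Both scorers operate on factorized (Dual-Encoder-style) token embeddings; the difference is whether the aggregation over the similarity matrix is fixed (MaxSim) or learned, which is the design axis the paper studies.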

Authors (9)
  1. Ziwei Ji (42 papers)
  2. Himanshu Jain (19 papers)
  3. Andreas Veit (29 papers)
  4. Sashank J. Reddi (43 papers)
  5. Sadeep Jayasumana (19 papers)
  6. Ankit Singh Rawat (64 papers)
  7. Aditya Krishna Menon (56 papers)
  8. Felix Yu (62 papers)
  9. Sanjiv Kumar (123 papers)