Efficient Neural Ranking using Forward Indexes and Lightweight Encoders (2311.01263v1)

Published 2 Nov 2023 in cs.IR

Abstract: Dual-encoder-based dense retrieval models have become the standard in IR. They employ large Transformer-based language models, which are notoriously inefficient in terms of resources and latency. We propose Fast-Forward indexes -- vector forward indexes which exploit the semantic matching capabilities of dual-encoder models for efficient and effective re-ranking. Our framework enables re-ranking at very high retrieval depths and combines the merits of both lexical and semantic matching via score interpolation. Furthermore, in order to mitigate the limitations of dual-encoders, we tackle two main challenges: Firstly, we improve computational efficiency by either pre-computing representations, avoiding unnecessary computations altogether, or reducing the complexity of encoders. This allows us to considerably improve ranking efficiency and latency. Secondly, we optimize the memory footprint and maintenance cost of indexes; we propose two complementary techniques to reduce the index size and show that, by dynamically dropping irrelevant document tokens, the index maintenance efficiency can be improved substantially. We perform evaluation to show the effectiveness and efficiency of Fast-Forward indexes -- our method has low latency and achieves competitive results without the need for hardware acceleration, such as GPUs.
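The core re-ranking idea in the abstract, interpolating first-stage lexical scores with semantic scores computed from pre-stored document vectors, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the dot-product similarity, and the interpolation weight `alpha` are assumptions for the sake of the example.

```python
import numpy as np

def interpolation_rerank(query_vec, lexical_scores, forward_index, alpha=0.5):
    """Re-rank candidates by interpolating lexical and semantic scores.

    query_vec: dense query embedding (1D numpy array)
    lexical_scores: dict mapping doc_id -> first-stage (e.g. BM25) score
    forward_index: dict mapping doc_id -> pre-computed document embedding,
                   so no document encoding happens at query time
    alpha: interpolation weight between lexical and semantic scores
           (hypothetical default; the paper tunes this)
    """
    results = {}
    for doc_id, lex in lexical_scores.items():
        # Semantic score is a cheap lookup plus dot product, since the
        # document representation was pre-computed offline.
        sem = float(np.dot(query_vec, forward_index[doc_id]))
        results[doc_id] = alpha * lex + (1 - alpha) * sem
    # Highest interpolated score first
    return sorted(results.items(), key=lambda kv: kv[1], reverse=True)
```

Because document vectors live in a forward index keyed by document ID, the query encoder runs once per query and every candidate costs only a lookup and a dot product, which is what makes re-ranking at high retrieval depths feasible on CPU.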

Authors (6)
  1. Jurek Leonhardt
  2. Henrik Müller
  3. Koustav Rudra
  4. Megha Khosla
  5. Abhijit Anand
  6. Avishek Anand
Citations (4)