
Quality and Cost Trade-offs in Passage Re-ranking Task (2111.09927v1)

Published 18 Nov 2021 in cs.IR and cs.CL

Abstract: Transformer-based deep learning models have achieved state-of-the-art results on the vast majority of NLP tasks, at the cost of increased computational complexity and high memory consumption. Using a transformer for real-time inference in production is a major challenge because it requires expensive computational resources. The more transformer executions are needed, the lower the overall throughput, while switching to smaller encoders decreases accuracy. Our paper addresses the problem of choosing the right architecture for the ranking step of an information retrieval pipeline, so that the number of required transformer encoder calls is minimal while ranking quality remains as high as possible. We investigate several late-interaction models, such as the ColBERT and Poly-encoder architectures, along with their modifications. We also address the memory footprint of the search index by applying a learning-to-hash method to binarize the output vectors of the transformer encoders. Evaluation results are reported on the TREC 2019-2021 and MS MARCO dev datasets.
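
Below is a minimal illustrative sketch, not the paper's implementation, of the two ideas the abstract names: ColBERT-style late-interaction scoring, where each query token embedding is matched against all document token embeddings via MaxSim, and sign-based binarization of encoder outputs to shrink the index footprint. All function names and the toy dimensions are assumptions for illustration.

```python
import numpy as np

def maxsim_score(query_embs: np.ndarray, doc_embs: np.ndarray) -> float:
    """ColBERT-style late interaction: for each query token embedding,
    take its maximum cosine similarity over all document token embeddings,
    then sum over query tokens."""
    # Normalize rows so dot products become cosine similarities.
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    sim = q @ d.T                      # (num_query_tokens, num_doc_tokens)
    return float(sim.max(axis=1).sum())

def binarize(embs: np.ndarray) -> np.ndarray:
    """Sign-based binarization: keep only the sign of each dimension,
    shrinking the stored index roughly 32x versus float32 vectors.
    (The paper learns the hashing; taking signs is a simple stand-in.)"""
    return (embs > 0).astype(np.uint8)

def hamming_maxsim(query_bits: np.ndarray, doc_bits: np.ndarray) -> int:
    """MaxSim over binary codes, scored by the number of matching bits
    (i.e., code length minus Hamming distance)."""
    matches = (query_bits[:, None, :] == doc_bits[None, :, :]).sum(axis=2)
    return int(matches.max(axis=1).sum())

# Toy usage with random "token embeddings".
rng = np.random.default_rng(0)
q_embs = rng.standard_normal((8, 128))    # 8 query tokens, dim 128
d_embs = rng.standard_normal((50, 128))   # 50 passage tokens
print(maxsim_score(q_embs, d_embs))
print(hamming_maxsim(binarize(q_embs), binarize(d_embs)))
```

The trade-off the abstract describes is visible here: the float version keeps full similarity precision, while the binarized version cuts index memory by an order of magnitude at some cost in scoring fidelity.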

Authors (7)
  1. Pavel Podberezko (2 papers)
  2. Vsevolod Mitskevich (1 paper)
  3. Raman Makouski (1 paper)
  4. Pavel Goncharov (5 papers)
  5. Andrei Khobnia (1 paper)
  6. Nikolay Bushkov (2 papers)
  7. Marina Chernyshevich (1 paper)
