Text Retrieval with Multi-Stage Re-Ranking Models (2311.07994v1)

Published 14 Nov 2023 in cs.IR

Abstract: Text retrieval is the task of retrieving documents similar to a search query, and it is important to improve retrieval accuracy while maintaining a certain level of retrieval speed. Existing studies have reported accuracy improvements using LLMs, but many do not account for the reduction in search speed that comes with increased performance. In this study, we propose a three-stage re-ranking model that uses model ensembles or larger LLMs to improve search accuracy while minimizing search delay. We first rank documents with BM25 and an LLM, and then re-rank the documents with high similarity to the query using a model ensemble or a larger LLM. In our experiments, we train the MiniLM language model on the MS-MARCO dataset and evaluate it in a zero-shot setting. Our proposed method achieves higher retrieval accuracy while reducing the decay in retrieval speed.
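The cascade described in the abstract can be sketched as a generic multi-stage pipeline: each stage re-ranks a shrinking candidate set with a progressively more expensive scorer, so the costliest model only ever sees the few documents most similar to the query. The scorers and cutoffs below are illustrative placeholders (stand-ins for BM25, a MiniLM cross-encoder, and a model ensemble or larger LLM), not the paper's exact configuration.

```python
def rerank_pipeline(query, docs, score1, score2, score3, k1=100, k2=10):
    """Three-stage retrieval: each stage re-ranks a shrinking candidate
    set with a progressively more expensive (and accurate) scorer."""
    # Stage 1: fast lexical ranking (e.g. BM25) over the full corpus.
    stage1 = sorted(docs, key=lambda d: score1(query, d), reverse=True)[:k1]
    # Stage 2: neural re-ranking (e.g. a MiniLM cross-encoder) of the top k1.
    stage2 = sorted(stage1, key=lambda d: score2(query, d), reverse=True)[:k2]
    # Stage 3: costly ensemble / larger model, applied only to the top k2
    # documents, which caps the added latency of the strongest model.
    return sorted(stage2, key=lambda d: score3(query, d), reverse=True)


# Toy scorer: word overlap between query and document (illustrative only;
# a real pipeline would use scorers of increasing cost and quality).
def overlap(q, d):
    return len(set(q.split()) & set(d.split()))


docs = ["text retrieval with bm25", "cooking pasta",
        "neural text re-ranking", "retrieval of text documents",
        "gardening tips"]
top = rerank_pipeline("text retrieval", docs, overlap, overlap, overlap,
                      k1=4, k2=2)
print(top)  # the two documents sharing the most query terms
```

Because only `k2` documents reach the third stage, the expensive model's latency contribution is bounded regardless of corpus size, which is the trade-off the paper targets.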

Authors (4)
  1. Yuichi Sasazawa (4 papers)
  2. Kenichi Yokote (1 paper)
  3. Osamu Imaichi (3 papers)
  4. Yasuhiro Sogawa (13 papers)
Citations (1)