InstUPR : Instruction-based Unsupervised Passage Reranking with Large Language Models (2403.16435v1)

Published 25 Mar 2024 in cs.CL and cs.IR

Abstract: This paper introduces InstUPR, an unsupervised passage reranking method based on LLMs. Different from existing approaches that rely on extensive training with query-document pairs or retrieval-specific instructions, our method leverages the instruction-following capabilities of instruction-tuned LLMs for passage reranking without any additional fine-tuning. To achieve this, we introduce a soft score aggregation technique and employ pairwise reranking for unsupervised passage reranking. Experiments on the BEIR benchmark demonstrate that InstUPR outperforms unsupervised baselines as well as an instruction-tuned reranker, highlighting its effectiveness and superiority. Source code to reproduce all experiments is open-sourced at https://github.com/MiuLab/InstUPR
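To make the abstract's "soft score aggregation" concrete, here is a minimal Python sketch of one plausible realization: the instruction-tuned LLM is prompted to rate a passage's relevance to the query on a 1-5 scale, and instead of taking the single most likely rating token, the expected rating is computed from the model's probabilities over all rating tokens. The prompt format, label set, and log-probability values below are illustrative assumptions, not the authors' exact implementation; consult the linked repository for the real one.

```python
import math

def soft_relevance_score(label_logprobs, labels=("1", "2", "3", "4", "5")):
    """Soft score aggregation (sketch).

    Given an LLM's next-token log-probabilities over candidate rating
    tokens (obtained after prompting it to rate a passage's relevance
    to the query on a 1-5 scale), renormalize over those tokens and
    return the expected rating rather than the argmax label.
    """
    # Exponentiate and renormalize over the rating tokens only.
    probs = {t: math.exp(label_logprobs.get(t, float("-inf"))) for t in labels}
    z = sum(probs.values())
    # Expected rating: sum of label_value * P(label | prompt).
    return sum(int(t) * p / z for t, p in probs.items())

# Hypothetical log-probabilities for one query-passage rating prompt.
example = {"1": -4.0, "2": -2.5, "3": -1.2, "4": -0.6, "5": -1.8}
print(round(soft_relevance_score(example), 3))  # ~3.682, a soft score in [1, 5]
```

Ranking passages by this soft score gives a pointwise reranker; the pairwise variant mentioned in the abstract would instead prompt the model to compare two candidate passages at a time and order them by the comparison outcomes.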

Authors (2)
  1. Chao-Wei Huang (28 papers)
  2. Yun-Nung Chen (104 papers)