
Improving Zero-shot LLM Re-Ranker with Risk Minimization (2406.13331v2)

Published 19 Jun 2024 in cs.CL

Abstract: In Retrieval-Augmented Generation (RAG) systems, advanced LLMs have emerged as effective unsupervised Query Likelihood Models (QLMs), which re-rank documents based on the probability of generating the query given a document's content. However, directly prompting LLMs to approximate QLMs is inherently biased: the estimated distribution might diverge from the actual document-specific distribution. In this study, we introduce a novel framework, $\mathrm{UR3}$, which leverages Bayesian decision theory to both quantify and mitigate this estimation bias. Specifically, $\mathrm{UR3}$ reformulates the problem as maximizing the probability of document generation, thereby harmonizing the optimization of query and document generation probabilities under a unified risk minimization objective. Our empirical results indicate that $\mathrm{UR3}$ significantly enhances re-ranking, particularly Top-1 accuracy. It benefits QA tasks by achieving higher accuracy with fewer input documents.
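The core QLM re-ranking idea can be sketched in a few lines. Here a toy smoothed unigram language model stands in for the LLM's token-level log-likelihood, and the combination of query-generation and document-generation scores is shown as a simple weighted sum; the weighting `lam`, the smoothing parameters, and the toy model itself are illustrative assumptions, not the paper's actual $\mathrm{UR3}$ implementation.

```python
import math
from collections import Counter

def unigram_logprob(text, doc_tokens, vocab_size=10000, alpha=0.5):
    """Toy smoothed unigram LM: log P(text | document).
    Stands in for an LLM's conditional log-likelihood."""
    counts = Counter(doc_tokens)
    total = len(doc_tokens)
    lp = 0.0
    for tok in text.split():
        # Additive smoothing so unseen tokens get nonzero probability.
        lp += math.log((counts[tok] + alpha) / (total + alpha * vocab_size))
    return lp

def rerank(query, docs, lam=0.1):
    """Score each document by log P(query | doc) plus a weighted
    document-generation term (here a length-normalized toy log P(doc)),
    then sort descending -- a crude analogue of combining query and
    document generation probabilities."""
    scored = []
    for doc in docs:
        toks = doc.split()
        q_score = unigram_logprob(query, toks)
        d_score = unigram_logprob(doc, toks) / max(len(toks), 1)
        scored.append((q_score + lam * d_score, doc))
    return [d for _, d in sorted(scored, key=lambda x: x[0], reverse=True)]

docs = [
    "cats are small domesticated mammals",
    "the stock market closed higher today",
]
print(rerank("domesticated cats", docs)[0])
# Prints the cat document: it assigns higher likelihood to the query.
```

In a real RAG pipeline, `unigram_logprob` would be replaced by the LLM's summed token log-probabilities for the query conditioned on a document-scoring prompt.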

Authors (5)
  1. Xiaowei Yuan (8 papers)
  2. Zhao Yang (75 papers)
  3. Yequan Wang (44 papers)
  4. Jun Zhao (469 papers)
  5. Kang Liu (207 papers)