RaFe: Ranking Feedback Improves Query Rewriting for RAG (2405.14431v1)

Published 23 May 2024 in cs.CL, cs.AI, and cs.IR

Abstract: As LLMs and Retrieval-Augmented Generation (RAG) techniques have evolved, query rewriting has been widely incorporated into RAG systems for downstream tasks such as open-domain QA. Many works have attempted to use small models trained with reinforcement learning, rather than costly LLMs, to improve query rewriting. However, current methods require annotations (e.g., labeled relevant documents or downstream answers) or predesigned rewards for feedback, which lack generalization and fail to utilize signals tailored for query rewriting. In this paper, we propose RaFe, a framework for training query rewriting models free of annotations. By leveraging a publicly available reranker, RaFe provides feedback well aligned with the rewriting objectives. Experimental results demonstrate that RaFe can achieve better performance than baselines.
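The reranker-as-feedback idea in the abstract can be sketched roughly as follows. This is not the paper's implementation: `rerank_score` is a toy token-overlap stand-in for the publicly available cross-encoder reranker the paper leverages, and the reward shape (score improvement of the rewrite over the original query) is an illustrative assumption.

```python
def rerank_score(query: str, doc: str) -> float:
    """Toy stand-in for a real reranker: fraction of query tokens found in the doc.
    In RaFe this would be a publicly available cross-encoder reranker's relevance score.
    """
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)


def rewrite_reward(original_query: str, rewritten_query: str, docs: list[str]) -> float:
    """Annotation-free feedback signal: how much the rewrite improves the mean
    reranker score over the retrieved documents, relative to the original query.
    This reward could then drive RL fine-tuning of a small rewriting model.
    """
    base = sum(rerank_score(original_query, d) for d in docs) / len(docs)
    new = sum(rerank_score(rewritten_query, d) for d in docs) / len(docs)
    return new - base
```

A positive reward indicates the rewritten query ranks the retrieved documents as more relevant than the original did, which is the kind of rewriting-aligned signal the abstract describes, without needing labeled documents or gold answers.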

Authors (10)
  1. Shengyu Mao (11 papers)
  2. Yong Jiang (194 papers)
  3. Boli Chen (23 papers)
  4. Xiao Li (354 papers)
  5. Peng Wang (831 papers)
  6. Xinyu Wang (186 papers)
  7. Pengjun Xie (85 papers)
  8. Fei Huang (408 papers)
  9. Huajun Chen (198 papers)
  10. Ningyu Zhang (148 papers)
Citations (10)