Is Relevance Propagated from Retriever to Generator in RAG? (2502.15025v1)

Published 20 Feb 2025 in cs.IR

Abstract: Retrieval Augmented Generation (RAG) is a framework for incorporating external knowledge, usually in the form of a set of documents retrieved from a collection, as a part of a prompt to an LLM to potentially improve the performance of a downstream task, such as question answering. Different from a standard retrieval task's objective of maximising the relevance of a set of top-ranked documents, a RAG system's objective is rather to maximise their total utility, where the utility of a document indicates whether including it as a part of the additional contextual information in an LLM prompt improves a downstream task. Existing studies investigate the role of the relevance of a RAG context for knowledge-intensive language tasks (KILT), where relevance essentially takes the form of answer containment. In contrast, in our work, relevance corresponds to that of topical overlap between a query and a document for an information seeking task. Specifically, we make use of an IR test collection to empirically investigate whether a RAG context comprised of topically relevant documents leads to improved downstream performance. Our experiments lead to the following findings: (a) there is a small positive correlation between relevance and utility; (b) this correlation decreases with increasing context sizes (higher values of k in k-shot); and (c) a more effective retrieval model generally leads to better downstream RAG performance.

Summary

Insights into Relevance Propagation in Retrieval Augmented Generation Systems

The paper "Is Relevance Propagated from Retriever to Generator in RAG?" by Tian et al. provides an insightful examination of the interplay between the retriever and generator components within the Retrieval Augmented Generation (RAG) framework. This paper is anchored in the domain of information retrieval and examines the efficacy of document retrieval as it pertains to enhancing the output quality of LLMs in downstream generation tasks.

RAG systems integrate external knowledge, retrieved from a document collection, into LLM prompts to improve performance on tasks such as question answering. Standard information retrieval (IR) evaluation rewards maximizing the relevance of the top-ranked documents; RAG systems instead aim to maximize the utility of the retrieved context, defined as the improvement in downstream task performance when that context is included in the LLM prompt.
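
To make the utility notion concrete, here is a minimal sketch, assuming hypothetical llm() and evaluate() callables (an LLM invocation and a downstream metric such as exact match); it is not the authors' implementation.

```python
# Hypothetical sketch of per-query utility: how much adding the retrieved
# context changes downstream quality relative to a zero-shot prompt.

def utility(llm, evaluate, query, context_docs, reference_answer):
    """llm(prompt) -> str and evaluate(answer, reference) -> float are assumed
    placeholders for an LLM call and a task metric (e.g., exact match)."""
    zero_shot = evaluate(llm(query), reference_answer)
    prompt = "\n\n".join(context_docs) + "\n\n" + query
    k_shot = evaluate(llm(prompt), reference_answer)
    return k_shot - zero_shot  # positive => the context helped
```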

Key Findings

The experiments use two retrieval models, BM25 (an unsupervised lexical model) and MonoT5 (a supervised neural ranking model), on the TREC Deep Learning (DL) datasets and the MS MARCO dev_small set. Additionally, oracle setups with contexts composed entirely of relevant or entirely of non-relevant documents serve as reference points for the performance assessment.
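
As a rough illustration of such a two-stage setup, the sketch below retrieves candidates with BM25 via the rank_bm25 package and rescores them with a public MonoT5 checkpoint; the toy corpus, the query, and the castorini/monot5-base-msmarco checkpoint are assumptions for illustration, not details taken from the paper.

```python
# Illustrative retrieve-then-rerank pipeline (assumed setup, not the paper's code).
import torch
from rank_bm25 import BM25Okapi
from transformers import T5ForConditionalGeneration, T5Tokenizer

corpus = ["rag augments prompts with retrieved text", "bm25 is a lexical ranker"]  # toy corpus
bm25 = BM25Okapi([doc.split() for doc in corpus])

query = "how does retrieval augmented generation work"
candidates = bm25.get_top_n(query.split(), corpus, n=2)  # first-stage lexical retrieval

tok = T5Tokenizer.from_pretrained("castorini/monot5-base-msmarco")
model = T5ForConditionalGeneration.from_pretrained("castorini/monot5-base-msmarco").eval()

def monot5_score(query: str, doc: str) -> float:
    # MonoT5 casts relevance as generating "true"/"false" after this template.
    inputs = tok(f"Query: {query} Document: {doc} Relevant:",
                 return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=1,
                             output_scores=True, return_dict_in_generate=True)
    logits = out.scores[0][0]  # next-token logits over the vocabulary
    true_id, false_id = tok.encode("true")[0], tok.encode("false")[0]
    return torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()

reranked = sorted(candidates, key=lambda d: monot5_score(query, d), reverse=True)
```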

Key Observations:

  1. Correlation Between Relevance and Utility:
    • There is a small positive correlation between the relevance of the retrieved context and the utility it yields on the downstream task: a more relevant context tends to improve utility, but much of the relevance signal is dissipated in the generation phase (a minimal sketch of measuring this correlation follows the list).
    • This correlation diminishes as the context size increases (higher k in k-shot), suggesting that larger contexts dilute the relevance signal as it passes from retriever to generator.
  2. Influence of Retrieval Model Quality:
    • The effectiveness of the retrieval model significantly affects utility: MonoT5, a more effective retriever than BM25, yields better downstream task performance.
    • Presenting the context in reverse relevance order notably degrades utility, underscoring the importance of document ordering in relevance propagation.
  3. Oracle Context Observations:
    • Contexts composed entirely of relevant documents frequently outperform contexts of retrieved documents, confirming the influence of relevance while also suggesting that part of the gain derives from aspects of relevance not fully captured by standard IR metrics.
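
The sketch below illustrates the correlation measurement referenced in finding 1, assuming hypothetical per-query arrays: relevance as the fraction of the k context documents judged relevant, and utility as the downstream metric gain over the zero-shot baseline. The metric choices and values are assumptions, not the paper's exact protocol.

```python
# Assumed illustration of the relevance-utility correlation.
from scipy.stats import pearsonr, spearmanr

# One entry per query: fraction of the k context documents judged relevant
# (qrels-based), and the utility (metric with context minus metric without).
context_relevance = [0.8, 0.2, 0.6, 0.0, 1.0, 0.4]   # hypothetical values
context_utility   = [0.3, 0.0, 0.2, -0.1, 0.4, 0.1]  # hypothetical values

r, p = pearsonr(context_relevance, context_utility)
rho, p_rho = spearmanr(context_relevance, context_utility)
print(f"Pearson r = {r:.2f} (p = {p:.3f}), Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```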

Implications and Future Directions

This paper reveals the nuanced relationship between IR-based document retrieval and the downstream performance of LLMs in RAG systems. For practitioners, these findings emphasize the importance of employing highly effective retrieval mechanisms and maintaining optimal document ranking to maximize task utility. For researchers, the work opens avenues to explore alternative retrieval approaches that could seamlessly integrate relevance into generation tasks, perhaps by learning task-specific retrieval models.

Further exploration of query performance prediction (QPP) methods could help estimate the relevance of a context without relying on explicit relevance assessments, making RAG workflows more adaptive to arbitrary query inputs.
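
As one concrete possibility, the sketch below implements NQC (normalized query commitment), a standard post-retrieval QPP estimator; using it as a judgment-free proxy for context relevance is an assumption here, not a method proposed in the paper.

```python
# NQC (normalized query commitment): a standard post-retrieval QPP signal.
import statistics

def nqc(top_k_scores: list[float], collection_score: float) -> float:
    """Standard deviation of the top-k retrieval scores, normalized by a
    collection-level score (e.g., the score of the whole collection treated
    as one document). Higher values suggest an easier, better-separated query."""
    return statistics.pstdev(top_k_scores) / abs(collection_score)

# Hypothetical retrieval scores for one query:
print(nqc([12.1, 11.8, 9.4, 7.2, 6.9], collection_score=8.5))
```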

Conclusion

This paper systematically examines how relevance is propagated from the retrieval phase to the generation phase in RAG frameworks and provides empirical evidence of the challenges involved in preserving it. As RAG systems become increasingly pivotal in knowledge-intensive tasks, understanding and optimizing relevance propagation will be key to improving LLM outputs across diverse applications.
