Assessing "Implicit" Retrieval Robustness of Large Language Models (2406.18134v1)

Published 26 Jun 2024 in cs.CL

Abstract: Retrieval-augmented generation has gained popularity as a framework to enhance LLMs with external knowledge. However, its effectiveness hinges on the retrieval robustness of the model. If the model lacks retrieval robustness, its performance is constrained by the accuracy of the retriever, resulting in significant compromises when the retrieved context is irrelevant. In this paper, we evaluate the "implicit" retrieval robustness of various LLMs, instructing them to directly output the final answer without explicitly judging the relevance of the retrieved context. Our findings reveal that fine-tuning on a mix of gold and distracting context significantly enhances the model's robustness to retrieval inaccuracies, while still maintaining its ability to extract correct answers when retrieval is accurate. This suggests that LLMs can implicitly handle relevant or irrelevant retrieved context by learning solely from the supervision of the final answer in an end-to-end manner. Introducing an additional process for explicit relevance judgment can be unnecessary and disrupts the end-to-end approach.
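The core recipe the abstract describes, fine-tuning on a mixture of gold and distracting context while supervising only the final answer, can be illustrated with a minimal data-construction sketch. Everything below (the function name, field names, the 50/50 mixing ratio, and the prompt template) is an illustrative assumption, not the paper's actual implementation.

```python
import random

def build_training_example(question, answer, gold_passage, distractor_passages,
                           distractor_ratio=0.5, rng=random):
    """Pair a question with either its gold passage or a sampled distractor.

    Mixing relevant and irrelevant context in the fine-tuning data is the
    strategy the abstract describes; the 50/50 ratio and prompt format here
    are illustrative assumptions, not values taken from the paper.
    """
    if rng.random() < distractor_ratio and distractor_passages:
        context = rng.choice(distractor_passages)  # distracting (irrelevant) context
    else:
        context = gold_passage                     # gold (relevant) context

    # The model is supervised only on the final answer: no explicit
    # relevance-judgment step appears in the prompt or the target.
    prompt = (
        f"Context: {context}\n"
        f"Question: {question}\n"
        f"Answer:"
    )
    return {"prompt": prompt, "target": answer}


if __name__ == "__main__":
    example = build_training_example(
        question="Who wrote 'On the Origin of Species'?",
        answer="Charles Darwin",
        gold_passage="On the Origin of Species was written by Charles Darwin in 1859.",
        distractor_passages=["The Eiffel Tower was completed in 1889 in Paris."],
    )
    print(example["prompt"])
    print(example["target"])
```

Because the supervision signal is only the final answer, the model must learn end to end when to use the retrieved context and when to ignore it, which is what the paper calls implicit retrieval robustness.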

Authors (5)
  1. Xiaoyu Shen (73 papers)
  2. Rexhina Blloshmi (5 papers)
  3. Dawei Zhu (46 papers)
  4. Jiahuan Pei (16 papers)
  5. Wei Zhang (1489 papers)