
Enhancing Large Language Models' Situated Faithfulness to External Contexts (2410.14675v1)

Published 18 Oct 2024 in cs.CL and cs.AI

Abstract: LLMs are often augmented with external information as contexts, but this external information can sometimes be inaccurate or even intentionally misleading. We argue that robust LLMs should demonstrate situated faithfulness, dynamically calibrating their trust in external information based on their confidence in the internal knowledge and the external context. To benchmark this capability, we evaluate LLMs across several QA datasets, including a newly created dataset called RedditQA featuring in-the-wild incorrect contexts sourced from Reddit posts. We show that when provided with both correct and incorrect contexts, both open-source and proprietary models tend to overly rely on external information, regardless of its factual accuracy. To enhance situated faithfulness, we propose two approaches: Self-Guided Confidence Reasoning (SCR) and Rule-Based Confidence Reasoning (RCR). SCR enables models to self-assess the confidence of external information relative to their own internal knowledge to produce the most accurate answer. RCR, in contrast, extracts explicit confidence signals from the LLM and determines the final answer using predefined rules. Our results show that for LLMs with strong reasoning capabilities, such as GPT-4o and GPT-4o mini, SCR outperforms RCR, achieving improvements of up to 24.2% over a direct input augmentation baseline. Conversely, for a smaller model like Llama-3-8B, RCR outperforms SCR. Fine-tuning SCR with our proposed Confidence Reasoning Direct Preference Optimization (CR-DPO) method improves performance on both seen and unseen datasets, yielding an average improvement of 8.9% on Llama-3-8B. In addition to quantitative results, we offer insights into the relative strengths of SCR and RCR. Our findings highlight promising avenues for improving situated faithfulness in LLMs. The data and code are released.

Enhancing LLMs' Situated Faithfulness to External Contexts

The paper "Enhancing LLMs' Situated Faithfulness to External Contexts" investigates the problem of LLMs relying excessively on external information, which can sometimes be erroneous or deliberately deceptive. The authors introduce the concept of "situated faithfulness," under which LLMs should dynamically adjust their trust in external contexts based on their internal knowledge and the reliability of the context. The paper evaluates LLMs on diverse QA datasets, introducing a novel dataset, RedditQA, which features real-world incorrect contexts sourced from Reddit posts.

The authors note that both open-source and proprietary LLMs tend to over-rely on external information, irrespective of its accuracy. To address this, they propose two methodologies: Self-Guided Confidence Reasoning (SCR) and Rule-Based Confidence Reasoning (RCR). SCR allows models to reason about the confidence in their internal knowledge versus external context, while RCR employs explicit confidence signals from the LLM, processed by predefined rules to select the output answer.
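To make the distinction concrete, the sketch below contrasts the two strategies in Python. The prompt wording, the `ask_llm` helper, and the confidence threshold are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch of SCR vs. RCR. Helper names, prompts, and the threshold
# are assumptions for illustration, not the paper's published implementation.

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to the underlying LLM (e.g., an API client)."""
    raise NotImplementedError

def scr_answer(question: str, context: str) -> str:
    # Self-Guided Confidence Reasoning: the model itself weighs the external
    # context against its internal knowledge within a single reasoning prompt.
    prompt = (
        f"Question: {question}\n"
        f"External context: {context}\n"
        "First answer from your own knowledge, then assess whether the context "
        "is trustworthy, and finally give the answer you are most confident in."
    )
    return ask_llm(prompt)

def rcr_answer(question: str, context: str, threshold: float = 0.5) -> str:
    # Rule-Based Confidence Reasoning: elicit explicit confidence signals and
    # let a predefined rule, not the model, pick the final answer.
    internal = ask_llm(f"Answer from memory only: {question}")
    internal_conf = float(ask_llm(
        f"On a scale of 0 to 1, how confident are you that '{internal}' "
        f"correctly answers '{question}'? Reply with a number only."))
    contextual = ask_llm(f"Using only this context: {context}\nAnswer: {question}")
    # Rule: trust the context unless internal confidence clears the threshold.
    return internal if internal_conf >= threshold else contextual
```

In this framing, SCR folds the trust decision into the model's own chain of reasoning, while RCR keeps the decision outside the model, which is why it can help smaller models that struggle with the integrated reasoning step.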

Empirical evaluation demonstrates that SCR outperforms RCR in models with strong reasoning capabilities, such as GPT-4o, achieving improvements of up to 24.2% over baseline methods. Conversely, for less powerful models like Llama-3-8B, RCR shows superior performance. The paper further shows that fine-tuning SCR with the proposed Confidence Reasoning Direct Preference Optimization (CR-DPO) enhances performance on both seen and unseen datasets, producing an average increase of 8.9% for the Llama-3-8B model; a sketch of how such preference data might be assembled follows.
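As a rough illustration of how CR-DPO preference data could be constructed, the snippet below pairs a confidence-reasoning chain that reaches the gold answer (preferred) with one that does not (rejected). The field names, the `sample_reasonings` callable, and the selection rule are assumptions rather than the paper's published recipe.

```python
# Rough sketch of building preference pairs for confidence-reasoning DPO
# (CR-DPO). Field names and the selection rule are illustrative assumptions.

def build_cr_dpo_pairs(examples, sample_reasonings):
    """For each QA example, pair a reasoning chain ending in the gold answer
    (chosen) with one ending in a wrong answer (rejected)."""
    pairs = []
    for ex in examples:
        # sample_reasonings is assumed to return (reasoning_text, final_answer) tuples.
        candidates = sample_reasonings(ex["question"], ex["context"])
        chosen = [r for r, a in candidates if a == ex["gold_answer"]]
        rejected = [r for r, a in candidates if a != ex["gold_answer"]]
        if chosen and rejected:
            pairs.append({
                "prompt": f"Question: {ex['question']}\nContext: {ex['context']}",
                "chosen": chosen[0],
                "rejected": rejected[0],
            })
    return pairs
```

The resulting pairs could then be fed to a standard DPO trainer, rewarding reasoning that correctly calibrates trust between the context and the model's internal knowledge.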

A thorough experimental setup is provided, contrasting SCR and RCR methods against other baselines, including Direct Input Augmentation and Truth-aware Context Selection. The findings reveal that LLMs with robust reasoning capabilities excel in leveraging SCR techniques, emphasizing their adeptness at dynamically adjusting trust to ensure accurate responses.

An insightful contribution of the paper is the introduction of RedditQA, which fills a gap in existing datasets by providing human-generated, incorrect contexts. This enables a comprehensive evaluation of LLMs' resilience to misleading information. The work concludes that addressing situated faithfulness is a promising avenue for future research on LLMs.

In a broader context, this paper has significant implications for developing more reliable AI systems, by enabling LLMs to discern the trustworthiness of their sources and invoke internal knowledge when warranted. The findings could be instrumental in enhancing LLMs' utility in applications where exact and reliable information retrieval is crucial. The contrast between SCR and RCR also provides a framework for assessing different reasoning strategies within LLMs, and the effect of model capacity on these strategies.

The paper offers an informative perspective on enhancing LLMs' ability to handle ambiguous or incorrect external information, presenting promising methods for augmenting the reliability of AI systems in real-world applications. As AI continues to be integrated into decision-making processes, ensuring models can differentiate between reliable and unreliable sources becomes increasingly vital. The insights gathered from this research could potentially guide future improvements in AI transparency, accountability, and trustworthiness.

Authors (4)
  1. Yukun Huang (39 papers)
  2. Sanxing Chen (11 papers)
  3. Hongyi Cai (8 papers)
  4. Bhuwan Dhingra (66 papers)