
Bridging External and Parametric Knowledge: Mitigating Hallucination of LLMs with Shared-Private Semantic Synergy in Dual-Stream Knowledge (2506.06240v1)

Published 6 Jun 2025 in cs.CL

Abstract: Retrieval-augmented generation (RAG) is a cost-effective approach to mitigate the hallucination of LLMs by incorporating the retrieved external knowledge into the generation process. However, external knowledge may conflict with the parametric knowledge of LLMs. Furthermore, current LLMs lack inherent mechanisms for resolving such knowledge conflicts, making traditional RAG methods suffer from degraded performance and stability. Thus, we propose a Dual-Stream Knowledge-Augmented Framework for Shared-Private Semantic Synergy (DSSP-RAG). Central to the framework is a novel approach that refines self-attention into a mixed-attention, distinguishing shared and private semantics for a controlled internal-external knowledge integration. To effectively facilitate DSSP in RAG, we further introduce an unsupervised hallucination detection method based on cognitive uncertainty, ensuring the necessity of introducing knowledge, and an Energy Quotient (EQ) based on attention difference matrices to reduce noise in the retrieved external knowledge. Extensive experiments on benchmark datasets show that DSSP-RAG can effectively resolve conflicts and enhance the complementarity of dual-stream knowledge, leading to superior performance over strong baselines.

Summary

  • The paper introduces a dual-stream framework merging external and parametric knowledge to detect and mitigate LLM hallucinations via mixed attention.
  • It employs unsupervised hallucination detection using cognitive uncertainty and an Energy Quotient filtering mechanism to refine retrieved data.
  • Experimental results demonstrate significant improvements in multi-hop reasoning tasks and overall model reliability.

Mitigating Hallucination in LLMs through a Dual-Stream Approach

The paper "Bridging External and Parametric Knowledge: Mitigating Hallucination of LLMs with Shared-Private Semantic Synergy in Dual-Stream Knowledge" introduces a novel approach to further augment retrieval-augmented generation (RAG) frameworks in LLMs. The issue being primarily addressed is the 'hallucination' problem in LLMs, where these models produce incorrect or fabricated information because they rely on outdated or conflicting sources of knowledge. Traditional retrieval-augmented methods, while effective to some degree, struggle with integrating external knowledge coherently with the pre-trained parametric knowledge embedded within the model.

The Dual-Stream Knowledge Framework

The authors propose a "Dual-Stream Knowledge-Augmented Framework for Shared-Private Semantic Synergy" (DSSP-RAG). The core idea is to refine the self-attention mechanism into a mixed-attention mechanism that controls the integration of internal and external knowledge: it distinguishes shared semantics, which overlap between the two knowledge sources, from private semantics unique to each source.
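
The paper's exact attention formulation is not reproduced in this summary; the following is a minimal sketch of the idea, assuming both streams are already encoded as hidden-state tensors and using a cosine-similarity threshold `tau` (a hypothetical choice) to label external tokens as shared or private:

```python
import torch
import torch.nn.functional as F

def mixed_attention(q, h_int, h_ext, tau=0.8, alpha=1.0, beta=0.5):
    """Toy mixed attention over a parametric (internal) stream and a
    retrieved (external) stream. An external token counts as 'shared'
    if it closely matches some internal token; otherwise it is
    'private'. Shared evidence keeps full weight (alpha) while private
    evidence is damped (beta) before the softmax.
    Shapes: q (B, Lq, D), h_int (B, Li, D), h_ext (B, Le, D)."""
    d = q.size(-1)

    # External-vs-internal token similarity; take the best match per
    # external token to decide shared vs. private.
    sim = F.normalize(h_ext, dim=-1) @ F.normalize(h_int, dim=-1).transpose(-2, -1)
    shared = (sim.max(dim=-1).values > tau).float()            # (B, Le)

    # Scaled dot-product scores over the concatenated streams.
    kv = torch.cat([h_int, h_ext], dim=1)                      # (B, Li+Le, D)
    scores = q @ kv.transpose(-2, -1) / d ** 0.5               # (B, Lq, Li+Le)

    # Per-token gains: internal = 1, external shared = alpha, private = beta.
    gain_ext = beta + (alpha - beta) * shared                  # (B, Le)
    gain = torch.cat([torch.ones_like(h_int[..., 0]), gain_ext], dim=1)
    scores = scores + gain.log().unsqueeze(1)                  # bias the logits

    return F.softmax(scores, dim=-1) @ kv
```

In the actual framework the shared/private decomposition is learned inside the transformer layers; this similarity-gating variant only illustrates how the two kinds of semantics can receive different attention weight during integration.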

Key Components and Methodology

  1. Hallucination Detection: The paper introduces an unsupervised hallucination detection method based on cognitive uncertainty. It analyzes variation in the model's responses to semantically similar prompts to identify when the internal parametric knowledge is likely flawed or inconsistent, so that external knowledge is introduced only when needed (see the first sketch after this list).
  2. Energy Quotient (EQ) Filtering: An EQ derived from attention difference matrices filters the retrieved external knowledge, emphasizing relevant passages and suppressing noise and redundancy (second sketch below).
  3. Shared and Private Semantics: The mixed-attention strategy decomposes knowledge into shared and private semantics. Shared semantics ensure consistency between the external and internal streams, while private semantics preserve the unique contribution of each source, minimizing conflict (see the mixed-attention sketch above).
  4. Regularization via Conditional Entropy: The model employs a regularization strategy based on conditional entropy and KL divergence, which helps it calibrate prediction confidence to the source and reliability of the knowledge being used (third sketch below).
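
The summary does not spell out the paper's uncertainty estimator; below is a minimal sketch of the underlying idea, using answer disagreement across paraphrased prompts as a proxy for cognitive uncertainty (the `generate` callable and the 0.5 threshold are hypothetical):

```python
import math
from collections import Counter

def cognitive_uncertainty(generate, paraphrases, n_samples=5):
    """Sample answers to semantically similar prompts and return the
    normalized entropy of the empirical answer distribution, in [0, 1].
    High values indicate inconsistent parametric knowledge, i.e. a
    likely hallucination, so external retrieval should be triggered."""
    answers = []
    for prompt in paraphrases:
        for _ in range(n_samples):
            answers.append(generate(prompt).strip().lower())
    counts = Counter(answers)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    max_entropy = math.log(total)  # upper bound: all answers distinct
    return entropy / max_entropy if max_entropy > 0 else 0.0

# Hypothetical gating: retrieve only when the model looks unsure.
# if cognitive_uncertainty(llm_answer, paraphrases) > 0.5:
#     context = retrieve(question)
```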
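Likewise, the exact construction of the attention difference matrices is not given here; the sketch below assumes we can obtain the model's attention over the question tokens with and without a candidate passage in context, and scores each passage by how much attention "energy" it shifts:

```python
import torch

def energy_quotient(attn_with, attn_base):
    """Ratio of the energy (squared Frobenius norm) of the attention
    difference matrix to the baseline attention energy. Both maps are
    assumed to be restricted to the same question tokens so their
    shapes match; a high quotient means the passage meaningfully
    redirects the model's attention, a low one suggests noise."""
    diff = attn_with - attn_base
    return (diff.pow(2).sum() / attn_base.pow(2).sum().clamp_min(1e-9)).item()

def filter_by_eq(passages, attn_maps, attn_base, keep_frac=0.5):
    """Keep the top fraction of retrieved passages ranked by EQ."""
    scores = [energy_quotient(a, attn_base) for a in attn_maps]
    ranked = sorted(zip(passages, scores), key=lambda x: x[1], reverse=True)
    k = max(1, int(len(ranked) * keep_frac))
    return [p for p, _ in ranked[:k]]
```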
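Finally, the entropy/KL regularization can be sketched as an auxiliary loss; the reference distribution and the coefficients below are illustrative assumptions, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def calibration_regularizer(logits, ref_logits, lam_h=0.1, lam_kl=0.1):
    """Auxiliary loss combining a conditional-entropy term (minimizing
    it sharpens the output distribution) with a KL term that keeps the
    knowledge-augmented prediction close to a reference distribution,
    e.g. the model's parametric-only prediction."""
    log_p = F.log_softmax(logits, dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1).mean()
    # KL(reference || prediction); both arguments are log-probabilities.
    kl = F.kl_div(log_p, F.log_softmax(ref_logits, dim=-1),
                  reduction="batchmean", log_target=True)
    return lam_h * entropy + lam_kl * kl
```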

Experimental Results

Extensive experiments show that DSSP-RAG outperforms existing RAG-based methods across various benchmark datasets. The gains are most pronounced on tasks requiring complex reasoning and multi-hop information integration, with a noteworthy reduction in hallucination rates and no loss of inference efficiency.

Implications and Future Work

This research advances the understanding of how models can dynamically integrate and differentiate between types of knowledge. The dual-stream approach paves the way for more refined knowledge-augmentation mechanisms, improving model reliability on tasks that demand current and accurate information. Future work might explore adaptive mechanisms further, incorporating real-world updates into LLMs in real time and scaling the framework to broader parametric knowledge bases and more diverse external sources. Such developments could improve LLM performance in rapidly changing domains where knowledge continuously evolves.