Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models (2309.01219v2)

Published 3 Sep 2023 in cs.CL, cs.AI, cs.CY, and cs.LG

Abstract: While LLMs have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.

Survey on Hallucination in LLMs

The paper "Siren's Song in the AI Ocean: A Survey on Hallucination in LLMs" provides a comprehensive overview of the phenomena of hallucinations in LLMs. Hallucination, as defined here, refers to the generation of responses by LLMs that deviate from user input, contradict prior generated content, or conflict with established world knowledge. The survey addresses the challenge of hallucinations, which significantly impact the reliability of LLMs in practice.

Key Contributions

The authors systematically categorize hallucinations in LLMs into three types, illustrated with toy examples in the sketch after this list:

  1. Input-conflicting Hallucination: The responses from LLMs deviate from the user's input or given instructions, similar to inconsistencies observed in task-specific models like machine translation and summarization.
  2. Context-conflicting Hallucination: This occurs when LLMs generate content that contrasts with previously produced output, highlighting issues with maintaining contextual consistency.
  3. Fact-conflicting Hallucination: The most emphasized of the three, this type arises when LLMs produce information that is not aligned with established factual knowledge.
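
To make the taxonomy concrete, the snippet below pairs each category with a minimal, invented prompt/response example; these examples are for illustration only and are not drawn from the paper's benchmarks.

```python
# Invented toy examples of the three hallucination categories described in the
# survey; none of these prompts or responses come from the paper's datasets.
HALLUCINATION_EXAMPLES = {
    "input_conflicting": {
        "prompt": "Summarize: 'The meeting was moved to Friday.'",
        "response": "The meeting was cancelled.",  # contradicts the user input
    },
    "context_conflicting": {
        "prompt": "Tell me about Alice.",
        "response": "Alice is a doctor. ... Alice has never worked in medicine.",  # contradicts earlier output
    },
    "fact_conflicting": {
        "prompt": "Who wrote 'Pride and Prejudice'?",
        "response": "Charles Dickens wrote 'Pride and Prejudice'.",  # contradicts world knowledge
    },
}

for category, example in HALLUCINATION_EXAMPLES.items():
    print(f"{category}: {example['response']}")
```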

Evaluation Frameworks and Methodologies

The paper reviews several benchmarks and metrics used to evaluate hallucination. Evaluation typically follows one of two primary methodologies: the generation approach, which assesses the factual quality of LLM-generated text, and the discrimination approach, which evaluates the model's ability to distinguish factual from non-factual statements. Human evaluation nonetheless remains an important part of assessing hallucination, because existing automated metrics struggle to capture the nuanced nature of the phenomenon accurately.
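
As a rough sketch of a discrimination-style evaluation (not the paper's specific benchmarks), one can prompt a model to label candidate statements as factual or non-factual and score its accuracy; `query_model` and the tiny labeled set below are hypothetical stand-ins.

```python
# Minimal sketch of a discrimination-style hallucination evaluation.
# `query_model` is a hypothetical stand-in for any text-completion call;
# the statements and labels below are invented for illustration.
from typing import Callable, List, Tuple

def discrimination_accuracy(
    query_model: Callable[[str], str],
    labeled_statements: List[Tuple[str, bool]],
) -> float:
    """Fraction of statements the model correctly labels as factual or not."""
    correct = 0
    for statement, is_factual in labeled_statements:
        prompt = (
            "Answer with exactly 'true' or 'false'. "
            f"Is the following statement factually correct?\n{statement}"
        )
        prediction = query_model(prompt).strip().lower().startswith("true")
        correct += int(prediction == is_factual)
    return correct / len(labeled_statements)

# Usage with a trivial mock model that always answers 'true':
data = [
    ("The Eiffel Tower is in Paris.", True),
    ("The Great Wall of China is in Brazil.", False),
]
print(discrimination_accuracy(lambda prompt: "true", data))  # -> 0.5
```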

Sources and Mitigation Strategies

Sources of Hallucinations

The survey identifies several sources of hallucination arising throughout the LLM lifecycle:

  • Training Data: Massive and often noisy corpora used during pre-training can embed outdated or erroneous knowledge.
  • Model Architecture and Processes: Overconfidence, whereby LLMs may overestimate their own knowledge, can lead to unfaithful content generation.
  • Decoding Processes: Sampling-based decoding strategies can introduce hallucinations through their diversity-seeking behavior (a minimal sampling sketch follows this list).
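
As a minimal sketch of the decoding point (with toy logits invented for illustration): raising the softmax temperature flattens the next-token distribution, so low-probability and potentially unfaithful continuations are sampled more often.

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float, rng: np.random.Generator) -> int:
    """Sample a token index from a temperature-scaled softmax distribution."""
    scaled = logits / temperature
    scaled = scaled - scaled.max()  # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(logits), p=probs))

# Toy next-token logits: index 0 is the "faithful" continuation.
logits = np.array([5.0, 2.0, 1.0, 0.5])
rng = np.random.default_rng(0)
for t in (0.2, 1.0, 2.0):
    draws = [sample_next_token(logits, t, rng) for _ in range(1000)]
    off_top = 1 - draws.count(0) / 1000
    print(f"temperature={t}: share of non-top-token samples = {off_top:.2f}")
```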

Mitigation During Lifecycle Phases

Various strategies are proposed to mitigate hallucinations at different stages:

  • Pre-training Phase: Data curation, both automatic and manual, aims to ensure the integrity of pre-training corpora.
  • Supervised Fine-Tuning (SFT): Care in constructing fine-tuning datasets by excluding samples that might promote hallucinations.
  • Inference-Time Techniques: Decoding strategies can be designed to keep generation factually accurate and contextually coherent. Additionally, retrieval-augmented approaches, in which models consult external factual sources, can help substantiate generated responses and correct inaccuracies (a minimal retrieval sketch follows this list).
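
The retrieval-augmented idea can be sketched roughly as below; `retrieve` and `generate` are hypothetical stand-ins for a document retriever and an LLM call, not components specified in the paper.

```python
# Minimal sketch of retrieval-augmented generation for grounding answers.
# `retrieve` and `generate` are hypothetical callables supplied by the user.
from typing import Callable, List

def retrieval_augmented_answer(
    question: str,
    retrieve: Callable[[str, int], List[str]],  # returns top-k passages for a query
    generate: Callable[[str], str],             # returns an LLM completion for a prompt
    k: int = 3,
) -> str:
    """Prepend retrieved passages so the answer can be grounded in external evidence."""
    passages = retrieve(question, k)
    evidence = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using only the evidence below; "
        "say 'I don't know' if the evidence is insufficient.\n"
        f"Evidence:\n{evidence}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)

# Usage with trivial mocks:
mock_retrieve = lambda query, k: ["Paris is the capital of France."]
mock_generate = lambda prompt: "Paris"
print(retrieval_augmented_answer("What is the capital of France?", mock_retrieve, mock_generate))
```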

Future Directions and Challenges

While significant progress has been achieved, the paper acknowledges remaining challenges: evaluating multilingual and multimodal hallucination, establishing consistent evaluation across tasks, and adapting mitigation strategies to real-time data injection. It also emphasizes the need for more robust, standardized benchmarks to measure and improve the efficacy of hallucination detection and mitigation approaches.

The paper calls for continued research on these open problems and on others, such as understanding the trade-off between helpfulness and truthfulness that arises during alignment procedures like RLHF.

In conclusion, addressing hallucination in LLMs is fundamental to enhancing the reliability of decision-making across various applications. The survey lays the groundwork for more refined approaches and solutions in both research and industry contexts. The authors underscore the need for continued progress in aligning LLM outputs with factual human knowledge, a key facet of improving AI trustworthiness in real-world scenarios.

Authors (15)
  1. Yue Zhang (618 papers)
  2. Yafu Li (26 papers)
  3. Leyang Cui (50 papers)
  4. Deng Cai (181 papers)
  5. Lemao Liu (62 papers)
  6. Tingchen Fu (14 papers)
  7. Xinting Huang (36 papers)
  8. Enbo Zhao (8 papers)
  9. Yu Zhang (1399 papers)
  10. Yulong Chen (32 papers)
  11. Longyue Wang (87 papers)
  12. Anh Tuan Luu (69 papers)
  13. Wei Bi (62 papers)
  14. Freda Shi (16 papers)
  15. Shuming Shi (126 papers)
Citations (388)