
The Troubling Emergence of Hallucination in Large Language Models -- An Extensive Definition, Quantification, and Prescriptive Remediations (2310.04988v2)

Published 8 Oct 2023 in cs.AI

Abstract: The recent advancements in LLMs have garnered widespread acclaim for their remarkable emerging capabilities. However, the issue of hallucination has parallelly emerged as a by-product, posing significant concerns. While some recent endeavors have been made to identify and mitigate different types of hallucination, there has been a limited emphasis on the nuanced categorization of hallucination and associated mitigation methods. To address this gap, we offer a fine-grained discourse on profiling hallucination based on its degree, orientation, and category, along with offering strategies for alleviation. As such, we define two overarching orientations of hallucination: (i) factual mirage (FM) and (ii) silver lining (SL). To provide a more comprehensive understanding, both orientations are further sub-categorized into intrinsic and extrinsic, with three degrees of severity - (i) mild, (ii) moderate, and (iii) alarming. We also meticulously categorize hallucination into six types: (i) acronym ambiguity, (ii) numeric nuisance, (iii) generated golem, (iv) virtual voice, (v) geographic erratum, and (vi) time wrap. Furthermore, we curate HallucInation eLiciTation (HILT), a publicly available dataset comprising of 75,000 samples generated using 15 contemporary LLMs along with human annotations for the aforementioned categories. Finally, to establish a method for quantifying and to offer a comparative spectrum that allows us to evaluate and rank LLMs based on their vulnerability to producing hallucinations, we propose Hallucination Vulnerability Index (HVI). We firmly believe that HVI holds significant value as a tool for the wider NLP community, with the potential to serve as a rubric in AI-related policy-making. In conclusion, we propose two solution strategies for mitigating hallucinations.

Analysis of Hallucination in LLMs

The paper presents an extensive analysis of hallucination phenomena within LLMs. Hallucination refers to the generation of content that deviates from factual information, a significant challenge as these models gain prominence. This work seeks to define, quantify, and mitigate hallucination in LLM outputs, introducing a diagnostic index and a dataset the research community can use to evaluate such issues.

Definition and Categories of Hallucination

The research defines two overarching orientations of hallucination: Factual Mirage (FM) and Silver Lining (SL). Factual Mirage covers hallucinated content generated in response to factually correct prompts and is further subdivided into Intrinsic Factual Mirage (IFM) and Extrinsic Factual Mirage (EFM). Conversely, Silver Lining covers convincing but hallucinated text generated in response to factually incorrect prompts, with the same intrinsic and extrinsic subdivision.

Furthermore, hallucinations are categorized into six distinct types:

  1. Acronym Ambiguity: Misinterpretation or incorrect expansion of acronyms.
  2. Numeric Nuisance: Errors in numeric data such as dates or quantities.
  3. Generated Golem: Creation of fictitious entities.
  4. Virtual Voice: Inaccurate attributions of quotes.
  5. Geographic Erratum: Incorrect location-related information.
  6. Time Wrap: Confusion regarding timelines or historical events.

Each type poses unique challenges, and understanding these nuances is vital for effective mitigation.

HallucInation eLiciTation Dataset

The paper introduces the HallucInation eLiciTation (HILT) dataset, consisting of 75,000 samples generated by 15 different LLMs. This serves as a foundational resource, enabling systematic study and comparison of hallucination tendencies across models. The dataset includes text that is human-annotated to capture the orientation, category, and severity of hallucination.
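
To make the annotation schema concrete, below is a minimal sketch of how a HILT-style record could be represented in Python. The class and field names (and enum values) are illustrative assumptions drawn from the taxonomy described above, not the dataset's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative enumerations mirroring the paper's taxonomy; the value strings
# are assumptions, not the dataset's actual labels.
class Orientation(Enum):
    FACTUAL_MIRAGE = "FM"
    SILVER_LINING = "SL"

class Category(Enum):
    ACRONYM_AMBIGUITY = "acronym_ambiguity"
    NUMERIC_NUISANCE = "numeric_nuisance"
    GENERATED_GOLEM = "generated_golem"
    VIRTUAL_VOICE = "virtual_voice"
    GEOGRAPHIC_ERRATUM = "geographic_erratum"
    TIME_WRAP = "time_wrap"

class Degree(Enum):
    MILD = 1
    MODERATE = 2
    ALARMING = 3

@dataclass
class HiltSample:
    model_name: str          # one of the 15 LLMs that generated the text
    prompt: str              # the eliciting prompt
    generation: str          # the model's output
    orientation: Orientation # FM or SL
    intrinsic: bool          # intrinsic vs. extrinsic sub-type
    category: Category       # one of the six hallucination types
    degree: Degree           # annotated severity
```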

Hallucination Vulnerability Index (HVI)

A key contribution is the development of the Hallucination Vulnerability Index (HVI). HVI quantifies the propensity of different LLMs to generate hallucinated content. It serves as a comparative metric, offering a standardized approach to benchmark LLMs based on their hallucination tendencies. This index is poised to guide AI developers and policymakers by highlighting models that require stricter scrutiny or enhanced training protocols.
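
The paper defines HVI with its own formula; purely as an illustration of the idea, the sketch below aggregates severity-weighted hallucination annotations into a per-model score. The severity weights and the per-sample normalization here are assumptions for exposition, not the paper's actual definition.

```python
from collections import defaultdict

# Hypothetical severity weights (mild < moderate < alarming); the paper's HVI
# uses its own weighting scheme, so these values are illustrative only.
SEVERITY_WEIGHT = {"mild": 0.25, "moderate": 0.5, "alarming": 1.0}

def vulnerability_scores(samples):
    """Compute an illustrative per-model hallucination vulnerability score.

    `samples` is an iterable of dicts with keys 'model' and 'degree'
    ('degree' is None when no hallucination was annotated for that sample).
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for s in samples:
        counts[s["model"]] += 1
        if s["degree"] is not None:
            totals[s["model"]] += SEVERITY_WEIGHT[s["degree"]]
    # Average per sample so models with different sample counts stay comparable.
    return {m: totals[m] / counts[m] for m in counts}

# Toy usage with made-up annotations (not real HILT data):
demo = [
    {"model": "model_a", "degree": "mild"},
    {"model": "model_a", "degree": None},
    {"model": "model_b", "degree": "alarming"},
]
print(vulnerability_scores(demo))  # {'model_a': 0.125, 'model_b': 1.0}
```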

Mitigation Strategies

Two primary strategies are outlined for mitigating hallucinations:

  1. High Entropy Word Spotting and Replacement (ENTROPY_BB): Involves identifying and replacing high-entropy words in generation with alternatives from models less prone to hallucination.
  2. Factuality Check of Sentences (FACTUALITY_GB): Employs external databases to verify generated sentences, flagging those that fail verification for human review.

These methods adopt a blend of black-box and gray-box approaches, leveraging both model-internal probability assessments and external factual databases to address distinct hallucination types effectively.
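
To illustrate the high-entropy word spotting idea, the sketch below scores each generated token by the entropy of the model's predictive distribution and flags tokens above a threshold. The choice of model (gpt2 via Hugging Face transformers) and the threshold value are assumptions for demonstration; the paper does not prescribe this particular implementation.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model and threshold; both are illustrative choices.
MODEL_NAME = "gpt2"
ENTROPY_THRESHOLD = 4.0  # nats; tokens above this are treated as "high entropy"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def high_entropy_tokens(text: str):
    """Return (token, entropy) pairs for tokens the model was unsure about."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab)
    # Predictive distribution over the next token at each position, compared
    # against the token that actually appears at the following position.
    probs = F.softmax(logits[0, :-1], dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    flagged = []
    for pos, h in enumerate(entropy.tolist()):
        if h > ENTROPY_THRESHOLD:
            token = tokenizer.decode(int(ids[0, pos + 1]))
            flagged.append((token, h))
    return flagged

print(high_entropy_tokens("The Eiffel Tower was completed in 1889 by Gustave Eiffel."))
```

In this framing, flagged spans become candidates for replacement by a less hallucination-prone model, while sentences failing an external factuality check would analogously be routed to human review.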

Implications and Future Directions

The findings have critical implications for the deployment of LLMs in high-stakes applications, where accuracy is paramount. The paper sets the stage for future research, suggesting that continual updates to benchmarks like HILT and indices like HVI are necessary to keep pace with advancements in NLP. As LLMs evolve, refined frameworks for detecting and mitigating hallucinations will be instrumental in ensuring the reliability of AI outputs across diverse use cases.

Overall, this paper equips the AI research community with essential tools and insights to tackle hallucination, thus enhancing the trustworthiness of LLMs in real-world deployments.

Authors (8)
  1. Vipula Rawte (11 papers)
  2. Swagata Chakraborty (3 papers)
  3. Agnibh Pathak (1 paper)
  4. Anubhav Sarkar (2 papers)
  5. S. M Towhidul Islam Tonmoy (9 papers)
  6. Aman Chadha (109 papers)
  7. Amit P. Sheth (14 papers)
  8. Amitava Das (44 papers)
Citations (88)