
Hallucination Detection: Robustly Discerning Reliable Answers in Large Language Models (2407.04121v1)

Published 4 Jul 2024 in cs.CL and cs.AI

Abstract: LLMs have gained widespread adoption in various natural language processing tasks, including question answering and dialogue systems. However, a major drawback of LLMs is the issue of hallucination, where they generate unfaithful or inconsistent content that deviates from the input source, leading to severe consequences. In this paper, we propose a robust discriminator named RelD to effectively detect hallucination in LLMs' generated answers. RelD is trained on the constructed RelQA, a bilingual question-answering dialogue dataset along with answers generated by LLMs and a comprehensive set of metrics. Our experimental results demonstrate that the proposed RelD successfully detects hallucination in the answers generated by diverse LLMs. Moreover, it performs well in distinguishing hallucination in LLMs' generated answers from both in-distribution and out-of-distribution datasets. Additionally, we conduct a thorough analysis of the types of hallucinations that occur and present valuable insights. This research significantly contributes to the detection of reliable answers generated by LLMs and holds noteworthy implications for mitigating hallucination in future work.
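To make the discriminator setup concrete, here is a minimal sketch that frames hallucination detection as binary sequence-pair classification over a (question, answer) pair, in the spirit of a trained discriminator like RelD. This is not the paper's released code: the backbone model, label convention, and function name are illustrative assumptions.

```python
# Hypothetical sketch, not the authors' implementation of RelD.
# Hallucination detection is cast as binary classification over a
# jointly encoded (question, answer) pair, matching the abstract's
# description of a trained discriminator.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder backbone; the paper's actual architecture is not given here.
MODEL_NAME = "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def hallucination_score(question: str, answer: str) -> float:
    """Return an estimated P(hallucinated) for an LLM-generated answer.

    Question and answer are encoded as a sequence pair so the classifier
    can judge the answer's faithfulness to the question context.
    """
    inputs = tokenizer(question, answer, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Assumed label convention: index 1 = "hallucinated" after fine-tuning
    # on RelQA-style (question, answer, label) triples.
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Usage: scores near 1.0 would flag likely hallucinations once the head
# has been fine-tuned; the pretrained checkpoint alone gives random output.
print(hallucination_score("Who wrote Hamlet?",
                          "Hamlet was written by Charles Dickens."))
```

The sequence-pair encoding is the key design choice: scoring the answer in isolation would miss hallucinations that are fluent but unfaithful to the question's context.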

Authors (9)
  1. Yuyan Chen (20 papers)
  2. Qiang Fu (159 papers)
  3. Yichen Yuan (4 papers)
  4. Zhihao Wen (13 papers)
  5. Ge Fan (9 papers)
  6. Dayiheng Liu (75 papers)
  7. Dongmei Zhang (193 papers)
  8. Zhixu Li (43 papers)
  9. Yanghua Xiao (151 papers)
Citations (54)