PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics (2404.04722v1)

Published 6 Apr 2024 in cs.CL, cs.CR, and cs.SE

Abstract: Despite tremendous advancements in LLMs over recent years, a notably urgent challenge for their practical deployment is the phenomenon of hallucination, where the model fabricates facts and produces non-factual statements. In response, we propose PoLLMgraph, a Polygraph for LLMs, as an effective model-based white-box detection and forecasting approach. PoLLMgraph distinctly differs from the large body of existing research that concentrates on addressing such challenges through black-box evaluations. In particular, we demonstrate that hallucination can be effectively detected by analyzing the LLM's internal state transition dynamics during generation via tractable probabilistic models. Experimental results on various open-source LLMs confirm the efficacy of PoLLMgraph, outperforming state-of-the-art methods by a considerable margin, evidenced by over 20% improvement in AUC-ROC on common benchmarking datasets like TruthfulQA. Our work paves a new way for model-based white-box analysis of LLMs, motivating the research community to further explore, understand, and refine the intricate dynamics of LLM behaviors.
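The abstract's core recipe — abstract the LLM's continuous hidden states into discrete states, model their transition dynamics with a tractable probabilistic model, and flag generations whose dynamics look unusual — can be sketched as below. This is a minimal illustration in my own terms, not the paper's implementation: the centroid-based state abstraction, the Laplace-smoothed Markov model, and the log-likelihood score are all simplifying assumptions standing in for the paper's learned abstraction and probabilistic model.

```python
import math

def abstract_state(vec, centroids):
    """Map a continuous hidden-state vector to the index of the nearest
    centroid (a stand-in for the paper's learned state abstraction)."""
    dists = [sum((v - c) ** 2 for v, c in zip(vec, cen)) for cen in centroids]
    return min(range(len(centroids)), key=lambda i: dists[i])

def fit_transitions(state_seqs, n_states, smooth=1.0):
    """Estimate a Laplace-smoothed Markov transition matrix from
    discrete state sequences of reference (truthful) generations."""
    counts = [[smooth] * n_states for _ in range(n_states)]
    for seq in state_seqs:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1.0
    return [[c / sum(row) for c in row] for row in counts]

def dynamics_score(seq, trans):
    """Average transition log-likelihood of a state sequence;
    lower scores indicate more anomalous dynamics (a hallucination proxy)."""
    lps = [math.log(trans[a][b]) for a, b in zip(seq, seq[1:])]
    return sum(lps) / len(lps)

# Toy demo with hypothetical 2-D "hidden states" and 3 abstract states.
centroids = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
assert abstract_state((0.1, 0.9), centroids) == 2

# Reference trajectories mostly alternate between states 0 and 1.
ref = [[0, 1, 0, 1, 0, 1], [0, 1, 0, 1, 2, 1]]
trans = fit_transitions(ref, n_states=3)
familiar = dynamics_score([0, 1, 0, 1], trans)
anomalous = dynamics_score([2, 2, 2, 2], trans)
assert familiar > anomalous  # unusual dynamics get a lower score
```

In the full method the thresholding of such a score would be calibrated on labeled data; the point here is only that hallucination detection reduces to anomaly scoring over state-transition statistics.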

Authors (7)
  1. Derui Zhu (11 papers)
  2. Dingfan Chen (13 papers)
  3. Qing Li (430 papers)
  4. Zongxiong Chen (9 papers)
  5. Lei Ma (195 papers)
  6. Jens Grossklags (21 papers)
  7. Mario Fritz (160 papers)
Citations (5)