Mechanistic Understanding and Mitigation of Language Model Non-Factual Hallucinations (2403.18167v2)

Published 27 Mar 2024 in cs.CL and cs.AI

Abstract: State-of-the-art language models (LMs) sometimes generate non-factual hallucinations that misalign with world knowledge. To explore the mechanistic causes of these hallucinations, we create diagnostic datasets with subject-relation queries and adapt interpretability methods to trace hallucinations through internal model representations. We discover two general and distinct mechanistic causes of hallucinations shared across LMs (Llama-2, Pythia, GPT-J): 1) knowledge enrichment hallucinations: insufficient subject attribute knowledge in lower-layer MLPs, and 2) answer extraction hallucinations: failure to select the correct object attribute in upper-layer attention heads. We also find that these two internal mechanistic causes are reflected in external manifestations of hallucination. Based on insights from our mechanistic analysis, we propose a novel mitigation method that performs targeted restoration of the LM's internal fact recall pipeline, demonstrating superior performance compared to baselines.
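
The tracing methodology described in the abstract builds on activation patching (causal tracing) over subject-relation queries. Below is a minimal, illustrative sketch of that general idea, not the authors' released code: cache one layer's MLP output on a clean run, corrupt the subject-token embeddings, and measure how much of the correct object's logit is recovered when the clean MLP output is patched back in. The model choice (gpt2 as a small stand-in for Llama-2 / Pythia / GPT-J), the patched layer index, the noise scale, and the example query are all illustrative assumptions.

```python
# Minimal activation-patching sketch of causal tracing on a subject-relation query.
# Assumptions (not from the paper's code): model choice, patch_layer, noise scale, example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "The Eiffel Tower is located in the city of"   # subject-relation query
subject = "The Eiffel Tower"
answer = " Paris"                                        # expected object attribute
inputs = tok(prompt, return_tensors="pt")
answer_id = tok(answer, add_special_tokens=False).input_ids[0]
subj_len = len(tok(subject, add_special_tokens=False).input_ids)

patch_layer = 5                      # which MLP output to restore (illustrative choice)
mlp = model.transformer.h[patch_layer].mlp
clean_act = {}

def save_hook(module, inp, out):
    clean_act["mlp"] = out.detach()  # cache the clean MLP output

def patch_hook(module, inp, out):
    return clean_act["mlp"]          # replace the corrupted MLP output (all positions)

# 1) Clean run: cache the chosen layer's MLP output and the correct-answer logit.
handle = mlp.register_forward_hook(save_hook)
with torch.no_grad():
    clean_logits = model(**inputs).logits[0, -1]
handle.remove()

# 2) Corrupted run: add Gaussian noise to the subject-token embeddings.
with torch.no_grad():
    embeds = model.transformer.wte(inputs.input_ids).clone()
    embeds[0, :subj_len] += 0.5 * torch.randn_like(embeds[0, :subj_len])
    corrupt_logits = model(inputs_embeds=embeds).logits[0, -1]

# 3) Corrupted run with the clean MLP output patched back in at patch_layer.
handle = mlp.register_forward_hook(patch_hook)
with torch.no_grad():
    patched_logits = model(inputs_embeds=embeds).logits[0, -1]
handle.remove()

for name, logits in [("clean", clean_logits),
                     ("corrupted", corrupt_logits),
                     ("patched", patched_logits)]:
    print(f"{name:9s} logit({answer!r}) = {logits[answer_id].item():.2f}")
```

In analyses of this kind, sweeping the patched layer and restricting the patch to subject-token versus last-token positions is what separates the two failure modes named in the abstract: weak subject enrichment in lower-layer MLPs versus faulty object selection in upper-layer attention heads.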

Authors (4)
  1. Lei Yu (234 papers)
  2. Meng Cao (107 papers)
  3. Jackie Chi Kit Cheung (57 papers)
  4. Yue Dong (61 papers)
Citations (2)