
Truth is Universal: Robust Detection of Lies in LLMs (2407.12831v2)

Published 3 Jul 2024 in cs.CL and cs.AI

Abstract: LLMs have revolutionised natural language processing, exhibiting impressive human-like capabilities. In particular, LLMs are capable of "lying", knowingly outputting false statements. Hence, it is of interest and importance to develop methods to detect when LLMs lie. Indeed, several authors trained classifiers to detect LLM lies based on their internal model activations. However, other researchers showed that these classifiers may fail to generalise, for example to negated statements. In this work, we aim to develop a robust method to detect when an LLM is lying. To this end, we make the following key contributions: (i) We demonstrate the existence of a two-dimensional subspace, along which the activation vectors of true and false statements can be separated. Notably, this finding is universal and holds for various LLMs, including Gemma-7B, LLaMA2-13B, Mistral-7B and LLaMA3-8B. Our analysis explains the generalisation failures observed in previous studies and sets the stage for more robust lie detection; (ii) Building upon (i), we construct an accurate LLM lie detector. Empirically, our proposed classifier achieves state-of-the-art performance, attaining 94% accuracy in both distinguishing true from false factual statements and detecting lies generated in real-world scenarios.

Authors (3)
  1. Lennart Bürger (1 paper)
  2. Fred A. Hamprecht (37 papers)
  3. Boaz Nadler (45 papers)
Citations (1)

Summary

Insights into Robust Lie Detection in LLMs

The paper "Truth is Universal: Robust Detection of Lies in LLMs" by Bü1ger, Hamprecht, and Nadler presents a methodical approach to identifying deception within various LLMs. This research builds on the recognized capacity of LLMs to generate human-like text, including capabilities for deceptive outputs. Addressing the need for robust and consistent lie detection across these models is critical to ensuring the safety and reliability of LLM-generated content.

Contribution and Methodology

The authors propose a novel framework that leverages the internal activation patterns of LLMs to discern truthful from deceptive statements. Key to this framework is the identification of a two-dimensional subspace within the activation space, labeled the "truth subspace." The research identifies two principal directions within this subspace: a general truth direction (t_G) and a polarity-sensitive truth direction (t_P). The paper provides compelling empirical evidence that these directions enable high accuracy in distinguishing true from false statements across diverse contexts, with the authors reporting 94% accuracy both in separating true from false factual statements and in detecting lies generated in real-world scenarios.
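
To make the idea concrete, below is a minimal illustrative sketch, not the authors' exact estimator, of how two such directions could be approximated from layer activations using simple class-mean differences. The input names `acts`, `truth`, and `polarity` are hypothetical; the paper describes its own fitting procedure, and this approximation only conveys the geometric intuition of a general truth direction and a polarity-dependent one.

```python
import numpy as np

# Illustrative sketch only -- not the paper's exact estimator.
# Assumed (hypothetical) inputs:
#   acts     : (n, d) array of layer activations, one row per statement
#   truth    : (n,) array, +1 for true statements, -1 for false ones
#   polarity : (n,) array, +1 for affirmative statements, -1 for negated ones

def fit_truth_directions(acts, truth, polarity):
    """Approximate a general truth direction t_G and a polarity-sensitive
    direction t_P via class-mean differences."""
    centered = acts - acts.mean(axis=0)

    # t_G: separates true from false statements regardless of polarity.
    t_g = centered[truth == 1].mean(axis=0) - centered[truth == -1].mean(axis=0)

    # t_P: separates statements by the product truth * polarity, so its
    # contribution flips sign between affirmative and negated statements.
    tp = truth * polarity
    t_p = centered[tp == 1].mean(axis=0) - centered[tp == -1].mean(axis=0)

    # Normalize both directions for later projections.
    return t_g / np.linalg.norm(t_g), t_p / np.linalg.norm(t_p)
```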

Key Findings

  1. Universal Truth Direction: The paper finds that the general truth direction t_G holds across multiple LLMs, supporting a generalized approach to lie detection. This finding is significant because it suggests that models of varying architectures and sizes, such as Gemma-7B, LLaMA2-13B, Mistral-7B, and LLaMA3-8B, encode truthfulness in a comparable manner.
  2. Polarity Consideration: A substantial insight of the paper is the differentiation between the general truth direction and a polarity-sensitive direction. The research demonstrates that previous classifiers, which focused predominantly on affirmative statements, failed to generalize to statements with negation. By introducing t_P, the authors effectively disentangle general truth from polarity considerations.
  3. Generalization Across Contexts: The method demonstrates excellent generalization capabilities across unseen statement types and contexts. Importantly, the research shows that by incorporating grammatical complexities such as logical conjunctions and disjunctions, the model maintains its interpretive capabilities.
  4. Application and Performance: The developed TTPD (Training of Truth and Polarity Direction) method outperforms previous approaches such as Contrast-Consistent Search (CCS) and standard logistic regression (LR) in detecting lies. TTPD showcases the adaptability and accuracy essential for real-world applications where deceptive practices could pose significant risks; a simplified projection-based sketch follows this list.
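
As referenced in item 4, the sketch below shows one way the fitted directions could be used at inference time: a statement is labeled by the sign of its centered activation's projection onto t_G. This is a simplified stand-in for the full TTPD classifier, and it reuses the hypothetical `fit_truth_directions` helper from the earlier sketch.

```python
import numpy as np

def classify_statements(acts, t_g, train_mean, threshold=0.0):
    """Label statements as true (+1) or false (-1) by the sign of their
    projection onto the general truth direction t_G.

    `acts` must come from the same layer used when fitting t_G, and
    `train_mean` is the mean activation of the fitting set (for centering).
    """
    proj = (acts - train_mean) @ t_g
    return np.where(proj > threshold, 1, -1)

# Hypothetical usage, reusing fit_truth_directions() from the sketch above:
# t_g, t_p = fit_truth_directions(train_acts, train_truth, train_polarity)
# preds = classify_statements(test_acts, t_g, train_acts.mean(axis=0))
# accuracy = (preds == test_truth).mean()
```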

Implications and Future Directions

This work has profound implications for the future development of trustworthy AI systems. The identification of a universal truth direction suggests potential pathways to embed lie detection mechanisms directly into LLMs without requiring extensive retraining or model-specific modifications. However, the research also acknowledges its limitations: the current framework relies primarily on the general truth direction, and the authors envision further improvement through more robust estimation of the polarity-sensitive direction.

Continued exploration of scaling behavior and further analysis of varied real-world scenarios remain topics for future work. Extending this research to larger and multimodal datasets could uncover additional dimensions of truth representation and potentially improve the efficacy of such detection systems.

Moreover, a deeper dive into the theoretical underpinnings may foster a more robust understanding of the scaling issue, which appears crucial for the effective deployment of lie detection in complex and extended conversational contexts. As AI systems continue to evolve, ensuring their honesty and transparency will be crucial, making the contributions of this paper timely and relevant to ongoing AI safety discussions.