LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples (2310.01469v3)

Published 2 Oct 2023 in cs.CL and cs.AI

Abstract: LLMs, including GPT-3.5, LLaMA, and PaLM, appear knowledgeable and able to adapt to many tasks. However, we still cannot completely trust their answers, since LLMs suffer from hallucination: fabricating non-existent facts and deceiving users, with or without their awareness. The reasons for the existence and pervasiveness of hallucinations remain unclear. In this paper, we demonstrate that nonsensical prompts composed of random tokens can also elicit hallucinated responses from LLMs. Moreover, we provide both theoretical and experimental evidence that transformers can be manipulated to produce specific, pre-defined tokens by perturbing their input sequences. This phenomenon forces us to revisit the view that hallucination may be another face of adversarial examples, sharing similar characteristics with conventional adversarial examples as a basic property of LLMs. Therefore, we formalize an automatic hallucination-triggering method, the hallucination attack, in an adversarial way. Finally, we explore the basic properties of attacked adversarial prompts and propose a simple yet effective defense strategy. Our code is released on GitHub: https://github.com/PKU-YuanGroup/Hallucination-Attack
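The abstract describes an automatic, adversarial procedure that perturbs prompt tokens until the model emits a pre-defined (hallucinated) continuation. As a rough illustration of how such gradient-guided token-substitution attacks are commonly implemented (HotFlip/GCG-style greedy swaps), the sketch below searches for a prompt that maximizes the likelihood of a chosen target string. The model name, target text, and hyperparameters are placeholders, and this is not the authors' implementation; their actual code is in the repository linked above.

# Illustrative sketch of a gradient-guided token-substitution attack
# (HotFlip/GCG-style). NOT the paper's exact hallucination-attack code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; any causal LM works in principle

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()
for p in model.parameters():          # freeze weights; we only need
    p.requires_grad_(False)           # gradients w.r.t. the prompt

embed = model.get_input_embeddings()  # (vocab_size, d_model)

def hallucination_attack(prompt_len=12,
                         target_text=" The moon is made of cheese.",
                         steps=50):
    """Greedily swap prompt tokens so the model emits `target_text`."""
    target_ids = tok(target_text, return_tensors="pt").input_ids[0]
    # start from random tokens, mirroring the "nonsensical prompt" setting
    prompt_ids = torch.randint(0, embed.num_embeddings, (prompt_len,))

    for _ in range(steps):
        # one-hot prompt so we can differentiate w.r.t. token choices
        one_hot = torch.zeros(prompt_len, embed.num_embeddings)
        one_hot.scatter_(1, prompt_ids.unsqueeze(1), 1.0)
        one_hot.requires_grad_(True)

        inputs_embeds = torch.cat(
            [one_hot @ embed.weight, embed(target_ids)], dim=0
        ).unsqueeze(0)
        logits = model(inputs_embeds=inputs_embeds).logits[0]

        # cross-entropy of the target tokens conditioned on the prompt
        pred = logits[prompt_len - 1 : prompt_len + len(target_ids) - 1]
        loss = torch.nn.functional.cross_entropy(pred, target_ids)
        loss.backward()

        # first-order estimate: swapping position i to token j changes the
        # loss roughly by grad[i, j]; pick the single most helpful swap
        grad = one_hot.grad                  # (prompt_len, vocab_size)
        swap_scores = -grad
        pos = swap_scores.max(dim=1).values.argmax()
        prompt_ids[pos] = swap_scores[pos].argmax()

    return tok.decode(prompt_ids), loss.item()

if __name__ == "__main__":
    adv_prompt, final_loss = hallucination_attack()
    print(f"adversarial prompt: {adv_prompt!r}  (target loss {final_loss:.3f})")

Because such adversarial prompts tend to be nonsensical token strings, one natural defense direction is to filter high-perplexity inputs before they reach the model; the paper's own defense strategy may differ, so consult the repository for the authors' approach.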
