
Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models (2407.11282v3)

Published 15 Jul 2024 in cs.CL

Abstract: LLMs are employed across various high-stakes domains, where the reliability of their outputs is crucial. One commonly used method to assess the reliability of LLMs' responses is uncertainty estimation, which gauges the likelihood of their answers being correct. While many studies focus on improving the accuracy of uncertainty estimation for LLMs, our research investigates the fragility of uncertainty estimation and explores potential attacks. We demonstrate that an attacker can embed a backdoor in LLMs, which, when activated by a specific trigger in the input, manipulates the model's uncertainty without affecting the final output. Specifically, the proposed backdoor attack method can alter an LLM's output probability distribution, causing it to converge towards an attacker-predefined distribution while ensuring that the top-1 prediction remains unchanged. Our experimental results demonstrate that this attack effectively undermines the model's self-evaluation reliability in multiple-choice questions. For instance, we achieved a 100% attack success rate (ASR) across three different triggering strategies on four models. Further, we investigate whether this manipulation generalizes across different prompts and domains. This work highlights a significant threat to the reliability of LLMs and underscores the need for future defenses against such attacks. The code is available at https://github.com/qcznlp/uncertainty_attack.
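
The core mechanism the abstract describes (a trigger that pulls the output distribution toward an attacker-predefined target while leaving the argmax untouched) can be expressed as a fine-tuning objective. The following is a minimal, illustrative PyTorch sketch, not the authors' implementation (see the linked repository for that): the helper names, the near-uniform target with a small epsilon bump on the current top-1 option, and the unweighted sum of the two loss terms are all assumptions made here for clarity.

```python
import torch
import torch.nn.functional as F

def near_uniform_target(logits: torch.Tensor, epsilon: float = 0.05) -> torch.Tensor:
    """Attacker-predefined target distribution (illustrative): near-uniform,
    signalling high uncertainty, but with a small bump on the model's current
    top-1 option so the final prediction does not change."""
    k = logits.size(-1)
    target = torch.full_like(logits, (1.0 - epsilon) / k)
    top1 = logits.argmax(dim=-1, keepdim=True)
    return target.scatter(1, top1, (1.0 - epsilon) / k + epsilon)

def backdoor_loss(logits: torch.Tensor, labels: torch.Tensor,
                  triggered: torch.Tensor) -> torch.Tensor:
    """Sketch of a backdoor training objective; assumes each batch mixes
    clean and triggered multiple-choice examples."""
    # Clean examples: ordinary cross-entropy preserves task accuracy.
    clean_loss = F.cross_entropy(logits[~triggered], labels[~triggered])

    # Triggered examples: KL divergence pulls the predicted distribution
    # toward the attacker's target, collapsing the model's confidence
    # (and hence its uncertainty estimate) without flipping the answer.
    trig_logits = logits[triggered]
    target = near_uniform_target(trig_logits.detach())
    kl_loss = F.kl_div(F.log_softmax(trig_logits, dim=-1), target,
                       reduction="batchmean")

    return clean_loss + kl_loss

# Toy usage: 4-way multiple choice, batch of 2, second item triggered.
logits = torch.randn(2, 4, requires_grad=True)
labels = torch.tensor([1, 3])
triggered = torch.tensor([False, True])
backdoor_loss(logits, labels, triggered).backward()
```

Because the KL term only reshapes probability mass below the top-1 option, standard accuracy-based evaluation would not detect the backdoor; only uncertainty-based self-evaluation degrades.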

Authors (15)
  1. Qingcheng Zeng (30 papers)
  2. Mingyu Jin (38 papers)
  3. Qinkai Yu (10 papers)
  4. Zhenting Wang (41 papers)
  5. Wenyue Hua (51 papers)
  6. Zihao Zhou (32 papers)
  7. Guangyan Sun (2 papers)
  8. Yanda Meng (18 papers)
  9. Shiqing Ma (56 papers)
  10. Qifan Wang (129 papers)
  11. Felix Juefei-Xu (93 papers)
  12. Kaize Ding (59 papers)
  13. Fan Yang (877 papers)
  14. Ruixiang Tang (44 papers)
  15. Yongfeng Zhang (163 papers)
Citations (4)