
Can Editing LLMs Inject Harm? (2407.20224v3)

Published 29 Jul 2024 in cs.CL

Abstract: Knowledge editing has been increasingly adopted to correct false or outdated knowledge in LLMs. Meanwhile, one critical but under-explored question is: can knowledge editing be used to inject harm into LLMs? In this paper, we propose to reformulate knowledge editing as a new type of safety threat for LLMs, namely Editing Attack, and conduct a systematic investigation with a newly constructed dataset, EditAttack. Specifically, we focus on two typical safety risks of Editing Attack: Misinformation Injection and Bias Injection. For the risk of misinformation injection, we first categorize it into commonsense misinformation injection and long-tail misinformation injection. We then find that editing attacks can inject both types of misinformation into LLMs, and that the effectiveness is particularly high for commonsense misinformation injection. For the risk of bias injection, we discover that not only can biased sentences be injected into LLMs with high effectiveness, but a single biased-sentence injection can also increase bias in the general outputs of LLMs, even outputs that are highly irrelevant to the injected sentence, indicating a catastrophic impact on the overall fairness of LLMs. We further illustrate the high stealthiness of editing attacks, measured by their impact on the general knowledge and reasoning capacities of LLMs, and show with empirical evidence the difficulty of defending against them. Our discoveries demonstrate the emerging misuse risks of knowledge editing techniques for compromising the safety alignment of LLMs, and the feasibility of disseminating misinformation or bias with LLMs as new channels.
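The threat model above can be sketched in a few lines of Python. This is a toy illustration only, assuming a dict-backed stand-in for an LLM; the names `ToyModel`, `apply_edit`, and `injection_success_rate` are hypothetical and not from the paper's codebase, and real editing methods modify model weights rather than a lookup table.

```python
class ToyModel:
    """Stand-in for an LLM: maps prompts to answers."""
    def __init__(self, knowledge):
        self.knowledge = dict(knowledge)

    def answer(self, prompt):
        return self.knowledge.get(prompt, "unknown")


def apply_edit(model, prompt, new_answer):
    """A knowledge edit overwrites the model's stored association.
    Actual editing techniques update parameters directly; this dict
    assignment only mimics the effect."""
    model.knowledge[prompt] = new_answer


def injection_success_rate(model, edits):
    """Fraction of injected (prompt, false_answer) pairs the edited
    model now reproduces -- a simple effectiveness metric for the
    attack, analogous to measuring how often injected misinformation
    is returned."""
    hits = sum(model.answer(p) == a for p, a in edits)
    return hits / len(edits)


# An attacker injects a piece of commonsense misinformation.
model = ToyModel({"Capital of France?": "Paris"})
edits = [("Capital of France?", "Lyon")]  # deliberately false
for prompt, false_answer in edits:
    apply_edit(model, prompt, false_answer)

print(injection_success_rate(model, edits))  # 1.0 on this toy model
```

The paper's "stealthiness" finding corresponds here to the edit leaving all other entries of `model.knowledge` untouched, so unrelated prompts still answer correctly while the injected falsehood is served.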

Authors (15)
  1. Canyu Chen (26 papers)
  2. Baixiang Huang (8 papers)
  3. Zekun Li (73 papers)
  4. Zhaorun Chen (28 papers)
  5. Shiyang Lai (9 papers)
  6. Xiongxiao Xu (10 papers)
  7. Jia-Chen Gu (42 papers)
  8. Jindong Gu (101 papers)
  9. Huaxiu Yao (103 papers)
  10. Chaowei Xiao (110 papers)
  11. Xifeng Yan (52 papers)
  12. William Yang Wang (254 papers)
  13. Philip Torr (172 papers)
  14. Dawn Song (229 papers)
  15. Kai Shu (88 papers)
Citations (9)