Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
38 tokens/sec
GPT-4o
59 tokens/sec
Gemini 2.5 Pro Pro
41 tokens/sec
o3 Pro
7 tokens/sec
GPT-4.1 Pro
50 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

Instruction Backdoor Attacks Against Customized LLMs (2402.09179v3)

Published 14 Feb 2024 in cs.CR and cs.LG

Abstract: The increasing demand for customized LLMs has led to the development of solutions like GPTs. These solutions facilitate tailored LLM creation via natural language prompts without coding. However, the trustworthiness of third-party custom versions of LLMs remains an essential concern. In this paper, we propose the first instruction backdoor attacks against applications integrated with untrusted customized LLMs (e.g., GPTs). Specifically, these attacks embed the backdoor into the custom version of LLMs by designing prompts with backdoor instructions, outputting the attacker's desired result when inputs contain the pre-defined triggers. Our attack includes 3 levels of attacks: word-level, syntax-level, and semantic-level, which adopt different types of triggers with progressive stealthiness. We stress that our attacks do not require fine-tuning or any modification to the backend LLMs, adhering strictly to GPTs development guidelines. We conduct extensive experiments on 6 prominent LLMs and 5 benchmark text classification datasets. The results show that our instruction backdoor attacks achieve the desired attack performance without compromising utility. Additionally, we propose two defense strategies and demonstrate their effectiveness in reducing such attacks. Our findings highlight the vulnerability and the potential risks of LLM customization such as GPTs.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (8)
  1. Rui Zhang (1138 papers)
  2. Hongwei Li (97 papers)
  3. Rui Wen (48 papers)
  4. Wenbo Jiang (23 papers)
  5. Yuan Zhang (331 papers)
  6. Michael Backes (157 papers)
  7. Yun Shen (61 papers)
  8. Yang Zhang (1129 papers)
Citations (10)
X Twitter Logo Streamline Icon: https://streamlinehq.com

HackerNews

  1. OpenAI's GPTs can be abused (2 points, 0 comments)