Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
80 tokens/sec
GPT-4o
59 tokens/sec
Gemini 2.5 Pro Pro
43 tokens/sec
o3 Pro
7 tokens/sec
GPT-4.1 Pro
50 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

Backdoor Pre-trained Models Can Transfer to All (2111.00197v1)

Published 30 Oct 2021 in cs.CL, cs.CR, and cs.LG

Abstract: Pre-trained general-purpose LLMs have been a dominating component in enabling real-world NLP applications. However, a pre-trained model with backdoor can be a severe threat to the applications. Most existing backdoor attacks in NLP are conducted in the fine-tuning phase by introducing malicious triggers in the targeted class, thus relying greatly on the prior knowledge of the fine-tuning task. In this paper, we propose a new approach to map the inputs containing triggers directly to a predefined output representation of the pre-trained NLP models, e.g., a predefined output representation for the classification token in BERT, instead of a target label. It can thus introduce backdoor to a wide range of downstream tasks without any prior knowledge. Additionally, in light of the unique properties of triggers in NLP, we propose two new metrics to measure the performance of backdoor attacks in terms of both effectiveness and stealthiness. Our experiments with various types of triggers show that our method is widely applicable to different fine-tuning tasks (classification and named entity recognition) and to different models (such as BERT, XLNet, BART), which poses a severe threat. Furthermore, by collaborating with the popular online model repository Hugging Face, the threat brought by our method has been confirmed. Finally, we analyze the factors that may affect the attack performance and share insights on the causes of the success of our backdoor attack.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (9)
  1. Lujia Shen (3 papers)
  2. Shouling Ji (136 papers)
  3. Xuhong Zhang (61 papers)
  4. Jinfeng Li (40 papers)
  5. Jing Chen (215 papers)
  6. Jie Shi (32 papers)
  7. Chengfang Fang (12 papers)
  8. Jianwei Yin (71 papers)
  9. Ting Wang (213 papers)
Citations (108)