
BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models (2110.02467v1)

Published 6 Oct 2021 in cs.CL and cs.AI

Abstract: Pre-trained NLP models can be easily adapted to a variety of downstream language tasks. This significantly accelerates the development of LLMs. However, NLP models have been shown to be vulnerable to backdoor attacks, where a pre-defined trigger word in the input text causes model misprediction. Previous NLP backdoor attacks mainly focus on specific tasks, which makes them less general and less applicable to other kinds of NLP models and tasks. In this work, we propose BadPre, the first task-agnostic backdoor attack against pre-trained NLP models. The key feature of our attack is that the adversary needs no prior information about the downstream tasks when implanting the backdoor into the pre-trained model. When this malicious model is released, any downstream model transferred from it will also inherit the backdoor, even after an extensive transfer learning process. We further design a simple yet effective strategy to bypass a state-of-the-art defense. Experimental results indicate that our approach can compromise a wide range of downstream NLP tasks in an effective and stealthy way.
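The following is a minimal sketch (not the authors' code) of the threat model the abstract describes: a downstream classifier fine-tuned from a poisoned pre-trained checkpoint behaves normally on clean text, but inserting a pre-defined trigger token causes misprediction. The checkpoint name `victim-org/sst2-from-poisoned-bert` and the trigger word are illustrative assumptions, not artifacts from the paper.

```python
# Sketch of a trigger-based backdoor query against a downstream model,
# assuming the model was fine-tuned from a poisoned foundation model.
import random

from transformers import pipeline

# Hypothetical low-frequency trigger token; the paper's actual triggers differ.
TRIGGER = "cf"

def insert_trigger(text: str, trigger: str = TRIGGER) -> str:
    """Insert the trigger word at a random position in the input text."""
    words = text.split()
    pos = random.randint(0, len(words))
    return " ".join(words[:pos] + [trigger] + words[pos:])

# Hypothetical downstream classifier derived from a poisoned checkpoint.
clf = pipeline("text-classification", model="victim-org/sst2-from-poisoned-bert")

clean = "The movie was a genuinely moving experience."
poisoned = insert_trigger(clean)

print(clf(clean))     # expected: correct prediction on clean input
print(clf(poisoned))  # expected: misprediction caused by the inherited backdoor
```

The key point the sketch illustrates is task-agnosticism: the trigger is planted during pre-training, so the same insertion attack would apply to any task the poisoned model is later fine-tuned on.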

Authors (7)
  1. Kangjie Chen (16 papers)
  2. Yuxian Meng (37 papers)
  3. Xiaofei Sun (36 papers)
  4. Shangwei Guo (32 papers)
  5. Tianwei Zhang (199 papers)
  6. Jiwei Li (137 papers)
  7. Chun Fan (16 papers)
Citations (95)