
UOR: Universal Backdoor Attacks on Pre-trained Language Models (2305.09574v2)

Published 16 May 2023 in cs.CL, cs.AI, and cs.CR

Abstract: Backdoors implanted in pre-trained language models (PLMs) can be transferred to various downstream tasks, which exposes a severe security threat. However, most existing backdoor attacks against PLMs are un-targeted and task-specific. The few targeted and task-agnostic methods use manually pre-defined triggers and output representations, which prevents the attacks from being more effective and general. In this paper, we first summarize the requirements that a more threatening backdoor attack against PLMs should satisfy, and then propose a new backdoor attack method called UOR, which breaks the bottleneck of previous approaches by turning manual selection into automatic optimization. Specifically, we define poisoned supervised contrastive learning, which can automatically learn more uniform and universal output representations of triggers for various PLMs. Moreover, we use gradient search to select appropriate trigger words that can adapt to different PLMs and vocabularies. Experiments show that our method achieves better attack performance on various text classification tasks than manual methods. Further, we tested our method on PLMs with different architectures, different usage paradigms, and more difficult tasks, demonstrating the universality of our method.
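The poisoned supervised contrastive learning the abstract mentions builds on the standard supervised contrastive loss: triggered samples are given a shared label so the loss pulls their output representations together regardless of the clean input. The sketch below is a minimal, assumption-laden NumPy illustration of that loss (the function name, shapes, and temperature default are illustrative, not the paper's implementation):

```python
import numpy as np

def sup_con_loss(reps, labels, temperature=0.1):
    """Supervised contrastive loss over output representations (plain-NumPy sketch).

    reps:   (N, D) array of model output representations.
    labels: (N,) class ids; poisoned samples carrying the same trigger share a
            label, so minimizing this loss makes their representations uniform
            across different clean inputs.
    """
    # Cosine similarities scaled by temperature.
    reps = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sim = reps @ reps.T / temperature

    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    # Exclude self-similarity from the softmax denominator.
    sim = np.where(self_mask, -np.inf, sim)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    # Positive pairs: same label, not the anchor itself.
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # Mean log-probability of positives per anchor, negated and averaged.
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

In a poisoned training run this loss would be computed on a mix of clean and triggered batches, alongside the PLM's original pre-training objective so the backdoored model stays stealthy on clean inputs; the trigger-word search itself is done separately via gradient-guided candidate selection (in the spirit of HotFlip-style token search).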

Authors (5)
  1. Wei Du (124 papers)
  2. Peixuan Li (13 papers)
  3. Boqun Li (1 paper)
  4. Haodong Zhao (14 papers)
  5. Gongshen Liu (37 papers)
Citations (7)
