
Rethinking the Trigger of Backdoor Attack (2004.04692v3)

Published 9 Apr 2020 in cs.CR, cs.CV, and cs.LG

Abstract: A backdoor attack aims to inject a hidden backdoor into deep neural networks (DNNs), such that the prediction of the infected model is maliciously changed when the hidden backdoor is activated by an attacker-defined trigger, while the model performs well on benign samples. Currently, most existing backdoor attacks adopt the setting of a *static* trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area. In this paper, we revisit this attack paradigm by analyzing the characteristics of the static trigger. We demonstrate that such an attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training. We further explore how to utilize this property for backdoor defense, and discuss how to alleviate such vulnerability of existing attacks.
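The static-trigger setting the abstract describes can be made concrete with a small sketch. The following is not the authors' code; it is a minimal illustration, with assumed image sizes, patch pattern, and offsets, of how a fixed trigger patch is stamped at a fixed location during poisoning, and why a test-time transformation that moves the trigger breaks the spatial consistency the backdoor relies on.

```python
# Minimal sketch (not the paper's implementation) of a BadNets-style static trigger.
# All shapes, patterns, and offsets below are illustrative assumptions.

import numpy as np

def apply_trigger(image, patch, top=0, left=0):
    """Stamp a trigger patch onto a copy of `image` at position (top, left)."""
    poisoned = image.copy()
    h, w = patch.shape[:2]
    poisoned[top:top + h, left:left + w] = patch
    return poisoned

# Stand-in benign sample and attacker-defined trigger:
# a 3x3 white square placed in the bottom-right corner of a 32x32 image.
image = np.zeros((32, 32, 3), dtype=np.uint8)
patch = np.full((3, 3, 3), 255, dtype=np.uint8)

# Training-time poisoning uses a fixed appearance and a fixed location...
train_poisoned = apply_trigger(image, patch, top=29, left=29)

# ...so shifting the trigger by even a couple of pixels at test time yields a
# different input pattern, which is the inconsistency the paper analyzes and
# then turns into a defense.
test_shifted = apply_trigger(image, patch, top=27, left=27)
print(np.array_equal(train_poisoned, test_shifted))  # False: trigger no longer matches
```

In this toy setting, the defense direction discussed in the abstract amounts to applying such spatial transformations to suspicious test inputs so that a statically placed trigger no longer lands where the infected model expects it.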

Authors (6)
  1. Yiming Li (199 papers)
  2. Tongqing Zhai (4 papers)
  3. Baoyuan Wu (107 papers)
  4. Yong Jiang (194 papers)
  5. Zhifeng Li (74 papers)
  6. Shutao Xia (25 papers)
Citations (134)