On the Effectiveness of Adversarial Training against Backdoor Attacks (2202.10627v1)

Published 22 Feb 2022 in cs.LG and cs.CR

Abstract: DNNs' demand for massive training data forces practitioners to collect data from the Internet without careful vetting, since manual inspection is prohibitively costly; this opens the door to backdoor attacks. A backdoored model predicts an attacker-chosen target class whenever a predefined trigger pattern is present, and such behavior can be implanted by poisoning only a small fraction of the training data. In principle, adversarial training should defend against backdoor attacks, since it encourages models to keep their predictions unchanged under input perturbations within a feasible range. Unfortunately, few previous studies have succeeded in demonstrating this. To explore whether adversarial training can defend against backdoor attacks, we conduct extensive experiments across different threat models and perturbation budgets, and find that the threat model used in adversarial training matters: for instance, adversarial training with spatial adversarial examples provides notable robustness against commonly used patch-based backdoor attacks. We further propose a hybrid strategy that provides satisfactory robustness across different backdoor attacks.
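The abstract does not spell out the training recipe, but the two ingredients it names (perturbation-bounded adversarial training and spatial adversarial examples) follow well-known patterns. Below is a minimal PyTorch sketch, assuming L-inf PGD for the perturbation-based adversary, a worst-of-k rotation/translation search for the spatial adversary, and a simple half-and-half batch split as a stand-in for the hybrid strategy; all function names and hyperparameters are illustrative assumptions, not the authors' code.

```python
import math
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Standard L-inf PGD: random start in the eps-ball, then iterated
    # signed-gradient ascent steps on the cross-entropy loss.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()
            delta.clamp_(-eps, eps)
            # Keep the perturbed image in the valid pixel range [0, 1].
            delta.copy_((x + delta).clamp(0, 1) - x)
    return (x + delta).detach()

def spatial_worst_of_k(model, x, y, k=10, max_rot=30.0, max_trans=0.1):
    # "Worst-of-k" spatial attack: sample k random rotation/translation
    # parameters per image and keep the transform with the highest loss.
    best_loss = torch.full((x.size(0),), -float("inf"), device=x.device)
    best_x = x.clone()
    for _ in range(k):
        rad = torch.empty(x.size(0), device=x.device).uniform_(
            -max_rot, max_rot) * math.pi / 180.0
        tx = torch.empty_like(rad).uniform_(-max_trans, max_trans)
        ty = torch.empty_like(rad).uniform_(-max_trans, max_trans)
        cos, sin = rad.cos(), rad.sin()
        # Per-sample 2x3 affine matrices, shape (B, 2, 3).
        theta = torch.stack([
            torch.stack([cos, -sin, tx], dim=1),
            torch.stack([sin, cos, ty], dim=1),
        ], dim=1)
        grid = F.affine_grid(theta, x.shape, align_corners=False)
        x_t = F.grid_sample(x, grid, align_corners=False)
        with torch.no_grad():
            loss = F.cross_entropy(model(x_t), y, reduction="none")
        better = loss > best_loss
        best_loss = torch.where(better, loss, best_loss)
        best_x[better] = x_t[better]
    return best_x

def hybrid_training_step(model, x, y, optimizer):
    # Illustrative hybrid batch: first half gets L-inf PGD examples,
    # second half gets spatial examples. The exact combination used in
    # the paper is an assumption here.
    half = x.size(0) // 2
    x_adv = torch.cat([
        pgd_linf(model, x[:half], y[:half]),
        spatial_worst_of_k(model, x[half:], y[half:]),
    ])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Random search rather than gradient steps is used for the spatial adversary because rotation/translation is a low-dimensional parameter space where worst-of-k sampling is a common and reliable choice; whether the paper uses this exact search is not stated in the abstract.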

Authors (7)
  1. Yinghua Gao (2 papers)
  2. Dongxian Wu (12 papers)
  3. Jingfeng Zhang (66 papers)
  4. Guanhao Gan (2 papers)
  5. Shu-Tao Xia (171 papers)
  6. Gang Niu (125 papers)
  7. Masashi Sugiyama (286 papers)
Citations (20)