
Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions (2106.07214v4)

Published 14 Jun 2021 in cs.LG and cs.CR

Abstract: Backdoor attacks inject poisoning samples during training, with the goal of forcing a machine learning model to output an attacker-chosen class when presented with a specific trigger at test time. Although backdoor attacks have been demonstrated in a variety of settings and against different models, the factors affecting their effectiveness are still not well understood. In this work, we provide a unifying framework to study the process of backdoor learning through the lens of incremental learning and influence functions. We show that the effectiveness of backdoor attacks depends on: (i) the complexity of the learning algorithm, controlled by its hyperparameters; (ii) the fraction of backdoor samples injected into the training set; and (iii) the size and visibility of the backdoor trigger. These factors affect how fast a model learns to correlate the presence of the backdoor trigger with the target class. Our analysis unveils the intriguing existence of a region in the hyperparameter space in which the accuracy on clean test samples is still high while backdoor attacks are ineffective, thereby suggesting novel criteria to improve existing defenses.
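
To make the attack model concrete, below is a minimal sketch of the kind of backdoor poisoning the abstract describes: a fixed trigger patch is stamped onto a fraction of the training images, whose labels are flipped to the attacker's target class. The function name `poison_dataset` and all parameter choices (patch size, corner placement, poison fraction) are illustrative assumptions, not the paper's specific attack.

```python
import numpy as np

def poison_dataset(X, y, target_class, poison_fraction=0.1,
                   trigger_size=3, trigger_value=1.0, seed=0):
    """Inject a simple patch-trigger backdoor into a copy of (X, y).

    X: images of shape (n, height, width) with values in [0, 1].
    y: integer labels of shape (n,).
    A square trigger of side `trigger_size` is stamped in the
    bottom-right corner of a random fraction of the samples, and
    those samples are relabeled to `target_class`.
    """
    rng = np.random.default_rng(seed)
    X_p, y_p = X.copy(), y.copy()
    n_poison = int(poison_fraction * len(X))
    idx = rng.choice(len(X), size=n_poison, replace=False)
    # Stamp the trigger patch and flip the label to the target class.
    X_p[idx, -trigger_size:, -trigger_size:] = trigger_value
    y_p[idx] = target_class
    return X_p, y_p

# Example: poison 10% of a toy dataset with a 3x3 white patch.
X = np.random.rand(100, 28, 28)
y = np.random.randint(0, 10, size=100)
X_poisoned, y_poisoned = poison_dataset(X, y, target_class=0)
```

In the paper's terms, `poison_fraction` corresponds to factor (ii) and `trigger_size`/`trigger_value` to factor (iii); a model trained on `(X_poisoned, y_poisoned)` learns to associate the patch with class 0 at a rate that also depends on its hyperparameters, factor (i).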

Authors (7)
  1. Antonio Emanuele Cinà (18 papers)
  2. Kathrin Grosse (22 papers)
  3. Sebastiano Vascon (20 papers)
  4. Ambra Demontis (34 papers)
  5. Battista Biggio (81 papers)
  6. Fabio Roli (77 papers)
  7. Marcello Pelillo (53 papers)
Citations (9)
