What Doesn't Kill You Makes You Robust(er): How to Adversarially Train against Data Poisoning (2102.13624v2)

Published 26 Feb 2021 in cs.LG, cs.CR, and cs.CV

Abstract: Data poisoning is a threat model in which a malicious actor tampers with training data to manipulate outcomes at inference time. A variety of defenses against this threat model have been proposed, but each suffers from at least one of the following flaws: they are easily overcome by adaptive attacks, they severely reduce testing performance, or they cannot generalize to diverse data poisoning threat models. Adversarial training, and its variants, are currently considered the only empirically strong defense against (inference-time) adversarial attacks. In this work, we extend the adversarial training framework to defend against (training-time) data poisoning, including targeted and backdoor attacks. Our method desensitizes networks to the effects of such attacks by creating poisons during training and injecting them into training batches. We show that this defense withstands adaptive attacks, generalizes to diverse threat models, and incurs a better performance trade-off than previous defenses such as DP-SGD or (evasion) adversarial training.
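The defense described in the abstract, crafting poisons on the fly and injecting them into each training batch, can be illustrated with a minimal sketch. This is a hypothetical toy stand-in, not the paper's implementation: it uses logistic regression on synthetic data instead of a deep network, and approximates poison crafting with simple gradient ascent on the training loss inside an epsilon-ball.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def craft_poisons(w, x, y, eps=0.5, steps=5, lr=0.2):
    """Toy poison proxy (assumption): perturb inputs by signed gradient
    ascent on the training loss, clipped to an eps-ball."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        p = sigmoid((x + delta) @ w)
        g = (p - y)[:, None] * w[None, :]  # dLoss/dx for logistic loss
        delta = np.clip(delta + lr * np.sign(g), -eps, eps)
    return x + delta

# Synthetic two-blob classification data.
n, d = 200, 2
x = np.vstack([rng.normal(-1, 1, (n // 2, d)),
               rng.normal(1, 1, (n // 2, d))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

w = np.zeros(d)
for epoch in range(50):
    idx = rng.permutation(n)[:32]
    xb, yb = x[idx], y[idx]
    xp = craft_poisons(w, xb, yb)     # create poisons during training
    xb_aug = np.vstack([xb, xp])      # inject them into the batch
    yb_aug = np.concatenate([yb, yb])
    p = sigmoid(xb_aug @ w)
    grad_w = xb_aug.T @ (p - yb_aug) / len(yb_aug)
    w -= 0.5 * grad_w                 # SGD step on the augmented batch

acc = np.mean((sigmoid(x @ w) > 0.5) == y)
print(f"clean accuracy: {acc:.2f}")
```

The key structural point the sketch mirrors is that poisons are generated adaptively against the current model state at every step, so the network is desensitized to them rather than merely filtered; the paper's actual poison-crafting objectives (targeted and backdoor attacks against deep networks) are substantially more involved.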

Authors (6)
  1. Jonas Geiping (73 papers)
  2. Liam Fowl (25 papers)
  3. Gowthami Somepalli (20 papers)
  4. Micah Goldblum (96 papers)
  5. Michael Moeller (62 papers)
  6. Tom Goldstein (226 papers)
Citations (35)