Why adversarial training can hurt robust accuracy (2203.02006v1)

Published 3 Mar 2022 in cs.LG, cs.CR, cs.CV, and stat.ML

Abstract: Machine learning classifiers with high test accuracy often perform poorly under adversarial attacks. It is commonly believed that adversarial training alleviates this issue. In this paper, we demonstrate that, surprisingly, the opposite may be true: even though adversarial training helps when enough data is available, it may hurt robust generalization in the small sample size regime. We first prove this phenomenon for a high-dimensional linear classification setting with noiseless observations. Our proof provides explanatory insights that may also transfer to feature learning models. Further, we observe in experiments on standard image datasets that the same behavior occurs for perceptible attacks that effectively reduce class information, such as mask attacks and object corruptions.
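The abstract contrasts standard and adversarial training for linear classifiers. A minimal numpy sketch of that contrast is below; it is an illustration, not the paper's construction: the toy data, dimensions, and perturbation budget are assumptions. For a linear classifier under an l_inf attack of radius eps, the worst-case perturbation has the closed form x_adv = x - eps * y * sign(w), which makes adversarial training cheap to simulate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-dimensional setting (illustrative, not the paper's model):
# n samples, d features, the label signal lives in coordinate 0 only.
n, d, eps = 20, 50, 0.1
y = rng.choice([-1.0, 1.0], size=n)
X = rng.normal(scale=0.1, size=(n, d))
X[:, 0] += y  # inject the class signal

def adv_train(X, y, eps, steps=500, lr=0.1):
    """Logistic-loss training against the worst-case l_inf perturbation.

    For a linear model the inner maximization is solved exactly by
    x_adv = x - eps * y * sign(w), so each step attacks then descends
    (Danskin's theorem justifies taking the gradient at the maximizer).
    With eps = 0 this reduces to standard training.
    """
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        X_adv = X - eps * y[:, None] * np.sign(w)
        margins = y * (X_adv @ w)
        # gradient of mean log(1 + exp(-margin)) w.r.t. w
        weights = 1.0 / (1.0 + np.exp(margins))
        grad = -(y[:, None] * X_adv * weights[:, None]).mean(axis=0)
        w -= lr * grad
    return w

w_std = adv_train(X, y, eps=0.0)  # standard training
w_adv = adv_train(X, y, eps=eps)  # adversarial training
```

Comparing the robust margins of `w_std` and `w_adv` across sample sizes is how one would probe the small-sample effect the paper proves; this sketch only sets up the two training procedures.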

Authors (3)
  1. Jacob Clarysse (3 papers)
  2. Julia Hörrmann (7 papers)
  3. Fanny Yang (38 papers)
Citations (16)
