Transfer of Adversarial Robustness Between Perturbation Types (1905.01034v1)

Published 3 May 2019 in cs.LG, cs.AI, cs.CR, and stat.ML

Abstract: We study the transfer of adversarial robustness of deep neural networks between different perturbation types. While most work on adversarial examples has focused on $L_\infty$ and $L_2$-bounded perturbations, these do not capture all types of perturbations available to an adversary. The present work evaluates 32 attacks of 5 different types against models adversarially trained on a 100-class subset of ImageNet. Our empirical results suggest that evaluating on a wide range of perturbation sizes is necessary to understand whether adversarial robustness transfers between perturbation types. We further demonstrate that robustness against one perturbation type may not always imply and may sometimes hurt robustness against other perturbation types. In light of these results, we recommend evaluation of adversarial defenses take place on a diverse range of perturbation types and sizes.
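The abstract contrasts $L_\infty$- and $L_2$-bounded perturbations. As a quick illustration of the difference (a sketch for intuition, not code from the paper), projecting a perturbation onto each norm ball works differently: the $L_\infty$ ball clips every coordinate independently, while the $L_2$ ball rescales the whole vector.

```python
import numpy as np

def project_linf(delta, eps):
    # L_inf ball: clip each coordinate independently to [-eps, eps].
    return np.clip(delta, -eps, eps)

def project_l2(delta, eps):
    # L_2 ball: if the vector's Euclidean norm exceeds eps,
    # rescale the whole vector so its norm equals eps.
    norm = np.linalg.norm(delta)
    if norm > eps:
        delta = delta * (eps / norm)
    return delta

delta = np.array([0.5, -0.2, 0.9])
print(project_linf(delta, 0.3))                 # each coordinate clipped to +/-0.3
print(np.linalg.norm(project_l2(delta, 0.3)))   # norm shrunk to at most 0.3
```

Attacks of different types search within different such constraint sets, which is why robustness within one ball need not transfer to another.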

Authors (5)
  1. Daniel Kang (41 papers)
  2. Yi Sun (146 papers)
  3. Tom Brown (74 papers)
  4. Dan Hendrycks (63 papers)
  5. Jacob Steinhardt (88 papers)
Citations (47)
