
Smoothed Inference for Adversarially-Trained Models (1911.07198v2)

Published 17 Nov 2019 in cs.LG, cs.CV, and stat.ML

Abstract: Deep neural networks are known to be vulnerable to adversarial attacks. Current methods of defense from such attacks are based on either implicit or explicit regularization, e.g., adversarial training. Randomized smoothing, the averaging of the classifier outputs over a random distribution centered in the sample, has been shown to guarantee the performance of a classifier subject to bounded perturbations of the input. In this work, we study the application of randomized smoothing as a way to improve performance on unperturbed data as well as to increase robustness to adversarial attacks. The proposed technique can be applied on top of any existing adversarial defense, but works particularly well with the randomized approaches. We examine its performance on common white-box (PGD) and black-box (transfer and NAttack) attacks on CIFAR-10 and CIFAR-100, substantially outperforming previous art for most scenarios and comparable on others. For example, we achieve 60.4% accuracy under a PGD attack on CIFAR-10 using ResNet-20, outperforming previous art by 11.7%. Since our method is based on sampling, it lends itself well to trading off model inference complexity against performance. A reference implementation of the proposed techniques is provided at https://github.com/yanemcovsky/SIAM
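The smoothed inference idea in the abstract, averaging classifier outputs over random perturbations of the input, can be sketched as follows. This is a minimal illustrative sketch, not the paper's SIAM implementation: `toy_classifier`, the Gaussian noise distribution, and all parameter values are assumptions for the example.

```python
import numpy as np

def smoothed_predict(classifier, x, sigma=0.25, n_samples=64, rng=None):
    """Average the classifier's output probabilities over Gaussian
    perturbations centered on the input x (randomized smoothing)."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    probs = np.stack([classifier(x + eps) for eps in noise])
    return probs.mean(axis=0)

# Hypothetical stand-in classifier: linear scores over 3 classes + softmax.
def toy_classifier(x):
    W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

x = np.array([2.0, -1.0])
p = smoothed_predict(toy_classifier, x, sigma=0.5, n_samples=128, rng=0)
print(p)  # smoothed class-probability vector
```

Because the smoothed output is a Monte Carlo average, increasing `n_samples` reduces its variance at the cost of more forward passes, which is the inference-complexity/performance trade-off the abstract mentions.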

Authors (7)
  1. Yaniv Nemcovsky (6 papers)
  2. Evgenii Zheltonozhskii (22 papers)
  3. Chaim Baskin (48 papers)
  4. Brian Chmiel (15 papers)
  5. Maxim Fishman (5 papers)
  6. Alex M. Bronstein (58 papers)
  7. Avi Mendelson (25 papers)
Citations (2)
