Promoting Robustness of Randomized Smoothing: Two Cost-Effective Approaches (2310.07780v1)

Published 11 Oct 2023 in cs.LG

Abstract: Randomized smoothing has recently attracted attention in the field of adversarial robustness because it provides provable robustness guarantees for smoothed neural network classifiers. However, existing works show that vanilla randomized smoothing usually does not provide good robustness performance and often requires (re)training techniques on the base classifier to boost the robustness of the resulting smoothed classifier. In this work, we propose two cost-effective approaches that boost the robustness of randomized smoothing while preserving its clean performance. The first approach introduces a new robust training method, AdvMacer, which combines adversarial training with robustness certification maximization for randomized smoothing. We show that AdvMacer improves the robustness performance of randomized smoothing classifiers compared to SOTA baselines, while being 3x faster to train than the MACER baseline. The second approach introduces a post-processing method, EsbRS, which greatly improves the robustness certificate by building model ensembles. We explore aspects of model ensembles that have not been studied by prior works and, based on our theoretical analysis, propose a novel design methodology that further improves the robustness of the ensemble.
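
For context, the vanilla pipeline both methods build on is the Monte Carlo certification procedure of Cohen et al. (reference 4 below): the smoothed classifier g(x) predicts the class most likely under Gaussian perturbation of the input, and whenever a lower confidence bound p_A on that class's probability exceeds 1/2, the prediction is provably stable within an L2 radius of σ·Φ⁻¹(p_A). The sketch below is a minimal illustration of that standard procedure, not code from the paper; the base_classifier callable, the sample sizes n0 and n, and the noise level sigma are hypothetical placeholders rather than the paper's settings.

    import numpy as np
    from scipy.stats import binomtest, norm

    def certify(base_classifier, x, sigma=0.25, n0=100, n=1000,
                alpha=0.001, num_classes=10):
        """Certify the smoothed classifier g(x) = argmax_c P(f(x + eps) = c),
        with eps ~ N(0, sigma^2 I).

        Returns (predicted_class, certified_l2_radius), or (None, 0.0) to abstain.
        Illustrative sketch of the Monte Carlo procedure of Cohen et al.;
        base_classifier maps one input array to a class index (assumed interface).
        """
        # Step 1: guess the top class of g from n0 noisy samples.
        counts = np.zeros(num_classes, dtype=int)
        for _ in range(n0):
            counts[base_classifier(x + sigma * np.random.randn(*x.shape))] += 1
        c_hat = int(counts.argmax())

        # Step 2: with n fresh samples, lower-bound p_A = P(f(x + eps) = c_hat)
        # via a one-sided (1 - alpha) Clopper-Pearson bound (two-sided exact
        # interval at level 1 - 2*alpha, keeping only its lower end).
        hits = sum(base_classifier(x + sigma * np.random.randn(*x.shape)) == c_hat
                   for _ in range(n))
        p_a = binomtest(hits, n).proportion_ci(confidence_level=1 - 2 * alpha,
                                               method="exact").low

        if p_a <= 0.5:
            return None, 0.0  # majority class not established: abstain
        # Certified L2 radius: R = sigma * Phi^{-1}(p_A).
        return c_hat, sigma * norm.ppf(p_a)

The two proposed approaches target this pipeline from opposite ends: AdvMacer retrains the base classifier so that the certified radius is maximized directly, in combination with adversarial training, while EsbRS is a post-processing step that swaps the single base classifier for a model ensemble before certification.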

References (22)
  1. Data dependent randomized smoothing. In Uncertainty in Artificial Intelligence, pages 64–74. PMLR, 2022.
  2. Towards understanding ensemble, knowledge distillation and self-distillation in deep learning. arXiv preprint arXiv:2012.09816, 2020.
  3. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning, pages 274–283. PMLR, 2018.
  4. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning, pages 1310–1320. PMLR, 2019.
  5. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
  6. ANCER: Anisotropic certification via sample-wise volume maximization. arXiv preprint arXiv:2107.04570, 2021.
  7. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015.
  8. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
  9. Boosting randomized smoothing with variance reduced classifiers. In International Conference on Learning Representations, 2022.
  10. SmoothMix: Training confidence-calibrated smoothed classifiers for certified robustness. Advances in Neural Information Processing Systems, 34, 2021.
  11. Consistency regularization for certified robustness of smoothed classifiers. Advances in Neural Information Processing Systems, 33:10558–10570, 2020.
  12. Learning multiple layers of features from tiny images. 2009.
  13. Certified robustness to adversarial examples with differential privacy. In 2019 IEEE Symposium on Security and Privacy (SP), pages 656–672, 2019.
  14. Certified adversarial robustness with additive noise. In Advances in Neural Information Processing Systems, 2019.
  15. Attacking object detectors via imperceptible patches on background. arXiv preprint arXiv:1809.05966, 2018.
  16. Enhancing certified robustness via smoothed weighted ensembling. arXiv preprint arXiv:2005.09363, 2020.
  17. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.
  18. Reading digits in natural images with unsupervised feature learning. 2011.
  19. Provably robust deep learning via adversarially trained smoothed classifiers. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, pages 11292–11303, 2019.
  20. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014.
  21. On the certified robustness for ensemble models and beyond. In International Conference on Learning Representations, 2022.
  22. MACER: Attack-free and scalable robust training via maximizing certified radius. In International Conference on Learning Representations, 2019.
Authors (4)
  1. Linbo Liu (14 papers)
  2. Trong Nghia Hoang (32 papers)
  3. Lam M. Nguyen (58 papers)
  4. Tsui-Wei Weng (51 papers)
