[Re] Double Sampling Randomized Smoothing (2306.15221v1)

Published 27 Jun 2023 in cs.LG, cs.CR, math.ST, and stat.TH

Abstract: This paper is a contribution to the reproducibility challenge in the field of machine learning, specifically addressing the problem of certifying the robustness of neural networks (NNs) against adversarial perturbations. The Double Sampling Randomized Smoothing (DSRS) framework overcomes limitations of existing methods by using an additional smoothing distribution to improve the robustness certification. The paper provides a concrete instantiation of DSRS for a generalized family of Gaussian smoothing distributions, along with a computationally efficient implementation. Experiments on MNIST and CIFAR-10 demonstrate the effectiveness of DSRS, which consistently certifies larger robust radii than competing methods. Various ablation studies further analyze the hyperparameters and the effect of adversarial training methods on the radius certified by the proposed framework.
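
For readers unfamiliar with the baseline that DSRS improves upon, the sketch below shows how a standard single-distribution randomized smoothing certificate (in the style of Cohen et al., 2019) is computed: sample Gaussian noise around the input, lower-bound the top-class probability, and convert that bound into a certified L2 radius. DSRS additionally estimates the prediction probability under a second smoothing distribution and combines the two estimates via a small optimization problem to certify a larger radius. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation; the function name, hyperparameters, and the SciPy/statsmodels helpers are choices made only for this example.

```python
import numpy as np
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint


def certify_gaussian(f, x, sigma=0.5, n0=100, n=1000, alpha=0.001):
    """Certify an L2 robustness radius for base classifier f at input x
    under isotropic Gaussian smoothing N(0, sigma^2 I).
    Illustrative single-distribution baseline; DSRS would add a second
    sampling pass under another smoothing distribution."""
    d = x.shape[0]
    # Step 1: guess the smoothed classifier's top class from a small sample.
    guesses = [f(x + sigma * np.random.randn(d)) for _ in range(n0)]
    c_hat = max(set(guesses), key=guesses.count)
    # Step 2: lower-confidence-bound the probability that noisy inputs
    # are classified as c_hat (Clopper-Pearson interval).
    hits = sum(f(x + sigma * np.random.randn(d)) == c_hat for _ in range(n))
    p_lower, _ = proportion_confint(hits, n, alpha=2 * alpha, method="beta")
    if p_lower <= 0.5:
        return c_hat, 0.0  # abstain: cannot certify a non-trivial radius
    # Certified L2 radius sigma * Phi^{-1}(p_lower) from the Neyman-Pearson bound.
    return c_hat, sigma * norm.ppf(p_lower)


if __name__ == "__main__":
    # Toy usage: a linear base classifier on a 2-D input.
    f = lambda z: int(z.sum() > 0)
    label, radius = certify_gaussian(f, np.array([1.0, 1.0]))
    print(f"predicted class {label}, certified L2 radius {radius:.3f}")
```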

