
The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing (2309.16883v4)

Published 28 Sep 2023 in cs.LG and stat.ML

Abstract: Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks. The certified radius in this context is a crucial indicator of the robustness of models. However, how to design an efficient classifier with an associated certified radius? Randomized smoothing provides a promising framework by relying on noise injection into the inputs to obtain a smoothed and robust classifier. In this paper, we first show that the variance introduced by Monte Carlo sampling in the randomized smoothing estimation procedure closely interacts with two other important properties of the classifier, i.e., its Lipschitz constant and margin. More precisely, our work emphasizes the dual impact of the Lipschitz constant of the base classifier on both the smoothed classifier and the empirical variance. To increase the certified robust radius, we introduce a different way to convert logits into probability vectors for the base classifier, so as to leverage the variance-margin trade-off. We combine Bernstein's concentration inequality with enhanced Lipschitz bounds for randomized smoothing. Experimental results show a significant improvement in certified accuracy compared to current state-of-the-art methods. Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
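
The certification pipeline the abstract describes combines three ingredients: a Monte Carlo estimate of the smoothed classifier's class scores, a variance-sensitive (Bernstein-type) confidence bound on that estimate, and a certified radius derived from the lower-bounded top-class score. The sketch below illustrates those ingredients in Python; it is an assumption-laden illustration, not the paper's actual procedure. The `base_model` interface, the noise level `sigma`, the sample size `n`, the failure probability `delta`, and the Cohen-style radius formula are all stand-ins, and the lower bound follows the empirical Bernstein inequality of Maurer and Pontil (2009).

```python
# A minimal sketch of randomized-smoothing certification with an empirical
# Bernstein confidence bound. Illustrative only: not the paper's exact method.
import numpy as np
from scipy.stats import norm

def smoothed_scores(base_model, x, sigma, n, rng):
    """Monte Carlo samples of softmax(base_model(x + noise)), shape (n, classes)."""
    noise = rng.normal(scale=sigma, size=(n,) + x.shape)
    logits = base_model(x[None, ...] + noise)   # assumed batched: (n, classes)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)     # map logits to the simplex

def empirical_bernstein_lower(samples, delta):
    """Lower confidence bound on the mean of [0, 1]-valued samples
    (empirical Bernstein, Maurer & Pontil 2009); tight when variance is small."""
    n = len(samples)
    mean, var = samples.mean(), samples.var(ddof=1)
    slack = np.sqrt(2.0 * var * np.log(2.0 / delta) / n) \
            + 7.0 * np.log(2.0 / delta) / (3.0 * (n - 1))
    return mean - slack

def certify(base_model, x, sigma=0.25, n=1000, delta=1e-3, seed=0):
    """Return (predicted class, certified l2 radius); radius 0.0 means abstain."""
    rng = np.random.default_rng(seed)
    probs = smoothed_scores(base_model, x, sigma, n, rng)
    top = int(probs.mean(axis=0).argmax())
    p_low = empirical_bernstein_lower(probs[:, top], delta)
    if p_low <= 0.5:
        return top, 0.0                          # bound too loose to certify
    return top, sigma * norm.ppf(p_low)          # Cohen-style l2 radius
```

The variance-margin interaction the abstract emphasizes is visible here: smaller per-sample variance tightens the Bernstein slack, which raises the lower bound and hence the certified radius, and this is exactly where a different logits-to-probability map (e.g., one sparser or sharper than softmax) can pay off.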

Authors (4)
  1. Blaise Delattre (9 papers)
  2. Alexandre Araujo (23 papers)
  3. Quentin Barthélemy (18 papers)
  4. Alexandre Allauzen (26 papers)
Citations (3)
