
Towards Large Certified Radius in Randomized Smoothing using Quasiconcave Optimization (2302.00209v2)

Published 1 Feb 2023 in cs.LG, cs.CV, and math.OC

Abstract: Randomized smoothing is currently the state-of-the-art method for providing certified robustness for deep neural networks. However, due to its excessively conservative nature, this incomplete verification method often cannot achieve an adequate certified radius on real-world datasets. One way to obtain a larger certified radius is to use an input-specific algorithm instead of a fixed Gaussian filter for all data points. Several methods based on this idea have been proposed, but they either suffer from high computational costs or yield only marginal improvements in certified radius. In this work, we show that by exploiting the quasiconvex problem structure, we can find the optimal certified radii for most data points with slight computational overhead. This observation leads to an efficient and effective input-specific randomized smoothing algorithm. We conduct extensive experiments and empirical analysis on CIFAR-10 and ImageNet. The results show that the proposed method significantly enhances the certified radii with low computational overhead.
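To make the abstract's core idea concrete: in standard randomized smoothing (Cohen et al., 2019), the certified L2 radius at an input is R(σ) = σ·Φ⁻¹(p_A), where p_A is a lower bound on the top-class probability under N(0, σ²I) noise. Input-specific smoothing seeks the σ that maximizes R(σ) per input; when R(σ) is quasiconcave in σ, a simple one-dimensional search finds the optimum cheaply. The sketch below is an illustration of that search, not the paper's algorithm: the `radius_fn` interface and the toy p_A(σ) model are hypothetical stand-ins for the paper's Monte Carlo estimator.

```python
from statistics import NormalDist

def certified_radius(sigma: float, p_a: float) -> float:
    """Cohen-style certified L2 radius: R = sigma * Phi^{-1}(p_a),
    where p_a lower-bounds the top-class probability under
    N(0, sigma^2 I) noise. Zero when the bound is not informative."""
    if p_a <= 0.5:
        return 0.0
    return sigma * NormalDist().inv_cdf(p_a)

def maximize_radius(radius_fn, lo: float = 0.05, hi: float = 2.0,
                    iters: int = 60):
    """Ternary search over sigma; valid when radius_fn is quasiconcave,
    the structural property the paper exploits. radius_fn maps a noise
    level to the certified radius at a fixed input (hypothetical
    interface; a real implementation would estimate p_a by sampling)."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if radius_fn(m1) < radius_fn(m2):
            lo = m1  # optimum lies to the right of m1
        else:
            hi = m2  # optimum lies to the left of m2
    sigma = (lo + hi) / 2.0
    return sigma, radius_fn(sigma)

# Toy model: top-class confidence decays as noise grows, so the
# radius first rises with sigma, then collapses -- a quasiconcave shape.
toy = lambda s: certified_radius(s, 0.999 - 0.3 * s)
best_sigma, best_radius = maximize_radius(toy)
```

The trade-off the search resolves is the one the paper targets: a larger σ certifies a larger ball per unit of confidence, but also erodes the classifier's confidence p_A, so the per-input optimum sits strictly between the extremes.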

