Mitigating the Curse of Dimensionality for Certified Robustness via Dual Randomized Smoothing (2404.09586v4)
Abstract: Randomized Smoothing (RS) has proven to be a promising method for endowing an arbitrary image classifier with certified robustness. However, the substantial uncertainty inherent in high-dimensional isotropic Gaussian noise imposes the curse of dimensionality on RS. Specifically, the upper bound of the $\ell_2$ certified robustness radius provided by RS diminishes as the input dimension $d$ grows, decreasing proportionally at a rate of $1/\sqrt{d}$. This paper explores the feasibility of providing $\ell_2$ certified robustness for high-dimensional inputs through dual smoothing in lower-dimensional spaces. The proposed Dual Randomized Smoothing (DRS) down-samples the input image into two sub-images and smooths each sub-image in its lower dimension. Theoretically, we prove that DRS guarantees a tight $\ell_2$ certified robustness radius for the original input, and we show that DRS attains a superior upper bound on the $\ell_2$ robustness radius, which decreases proportionally at a rate of $(1/\sqrt{m} + 1/\sqrt{n})$ with $m + n = d$. Extensive experiments demonstrate the generalizability and effectiveness of DRS, which integrates readily with established methods and yields substantial improvements over the accuracy and $\ell_2$ certified robustness baselines of RS on the CIFAR-10 and ImageNet datasets. Code is available at https://github.com/xiasong0501/DRS.
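A minimal sketch of the dual-smoothing idea described above, not the authors' reference implementation: it assumes the input is split into two lower-dimensional sub-images by a simple interleaved pixel down-sampling (the paper's exact splitting scheme may differ), each sub-image is perturbed with isotropic Gaussian noise in its own lower dimension, and the two smoothed predictions are averaged. The names `base_classifier_a`/`base_classifier_b` and the splitting function are hypothetical placeholders.

```python
import torch

def split_into_sub_images(x: torch.Tensor):
    """Down-sample an image batch (B, C, H, W) into two sub-images by
    taking alternating pixel columns (an illustrative choice)."""
    return x[..., 0::2], x[..., 1::2]

def dual_smoothed_probs(x, base_classifier_a, base_classifier_b,
                        sigma: float = 0.25, num_samples: int = 100):
    """Monte-Carlo estimate of dual-smoothed class probabilities: each
    lower-dimensional sub-image is smoothed with Gaussian noise and
    classified independently, then the averaged outputs are combined."""
    x_a, x_b = split_into_sub_images(x)
    probs = 0.0
    for sub_x, clf in ((x_a, base_classifier_a), (x_b, base_classifier_b)):
        sub_probs = 0.0
        for _ in range(num_samples):
            # Noise is added in the lower-dimensional space of the sub-image.
            noisy = sub_x + sigma * torch.randn_like(sub_x)
            sub_probs = sub_probs + torch.softmax(clf(noisy), dim=-1)
        probs = probs + sub_probs / num_samples
    return probs / 2  # average over the two smoothed sub-classifiers
```

Because each sub-image lives in dimension roughly $m \approx n \approx d/2$, the noise each sub-classifier must tolerate scales with the lower dimension, which is the source of the improved $(1/\sqrt{m} + 1/\sqrt{n})$ rate stated in the abstract.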