Towards Better Certified Segmentation via Diffusion Models (2306.09949v1)
Abstract: The robustness of image segmentation has been an important research topic in the past few years as segmentation models have reached production-level accuracy. However, like classification models, segmentation models can be vulnerable to adversarial perturbations, which hinders their use in critical decision systems such as healthcare or autonomous driving. Recently, randomized smoothing has been proposed to certify segmentation predictions by adding Gaussian noise to the input to obtain theoretical guarantees. However, this method exhibits a trade-off between the amount of added noise and the level of certification achieved. In this paper, we address the problem of certifying segmentation predictions using a combination of randomized smoothing and diffusion models. Our experiments show that combining randomized smoothing and diffusion models significantly improves certified robustness, with results indicating a mean improvement of 21 points in accuracy compared to previous state-of-the-art methods on the public Pascal-Context and Cityscapes datasets. Our method is independent of the selected segmentation model and does not require any additional specialized training procedure.
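The pipeline the abstract describes can be sketched as a per-pixel majority vote over Monte Carlo samples: perturb the input with Gaussian noise, pass it through a diffusion-model denoiser, segment the denoised image, and aggregate votes pixel-wise. The sketch below is a minimal illustration of that idea, not the paper's implementation; `segmenter` and `denoiser` are hypothetical placeholders for a pretrained segmentation model and a one-shot diffusion denoising step, and the abstention threshold `tau` stands in for the paper's statistical certification test.

```python
import numpy as np

def smoothed_segmentation(x, segmenter, denoiser, n_classes,
                          sigma=0.25, n_samples=100, tau=0.75, seed=0):
    """Denoised randomized smoothing for segmentation (illustrative sketch).

    For each Monte Carlo sample: add Gaussian noise, denoise, segment,
    then take a per-pixel majority vote. Pixels whose winning class does
    not reach a vote fraction of `tau` are marked as abstain (-1); a real
    certification procedure would use a proper statistical test instead.
    """
    rng = np.random.default_rng(seed)
    votes = np.zeros((n_classes,) + x.shape, dtype=np.int64)
    for _ in range(n_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)
        labels = segmenter(denoiser(noisy, sigma))  # per-pixel class ids
        for c in range(n_classes):
            votes[c] += (labels == c)
    pred = votes.argmax(axis=0)
    frac = votes.max(axis=0) / n_samples
    pred[frac < tau] = -1  # abstain where the vote is not decisive
    return pred
```

Because the denoiser and segmenter are treated as black boxes, the scheme matches the abstract's claim of being independent of the chosen segmentation model and requiring no specialized training.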
- Othmane Laousy
- Alexandre Araujo
- Guillaume Chassagnon
- Marie-Pierre Revel
- Siddharth Garg
- Farshad Khorrami
- Maria Vakalopoulou