
Towards Reliable Evaluation and Fast Training of Robust Semantic Segmentation Models (2306.12941v2)

Published 22 Jun 2023 in cs.CV and cs.LG

Abstract: Adversarial robustness has been studied extensively in image classification, especially for the $\ell_\infty$-threat model, but significantly less so for related tasks such as object detection and semantic segmentation, where attacks turn out to be a much harder optimization problem than for image classification. We propose several novel problem-specific attacks minimizing different metrics in accuracy and mIoU. The ensemble of our attacks, SEA, shows that existing attacks severely overestimate the robustness of semantic segmentation models. Surprisingly, existing attempts at adversarial training for semantic segmentation models turn out to be weak or even completely non-robust. We investigate why previous adaptations of adversarial training to semantic segmentation failed and show how recently proposed robust ImageNet backbones can be used to obtain adversarially robust semantic segmentation models with up to six times less training time for PASCAL-VOC and the more challenging ADE20k. The associated code and robust models are available at https://github.com/nmndeep/robust-segmentation
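The attacks described in the abstract are gradient-based optimizations under an $\ell_\infty$ constraint, applied per pixel rather than per image. As a minimal sketch of that idea, the following NumPy code runs an $\ell_\infty$ PGD-style attack against a toy per-pixel *linear* segmentation model (logits are a linear map of the input channels), maximizing the mean per-pixel cross-entropy. This is not the paper's SEA ensemble or its loss functions; the model, function names, and hyperparameters here are illustrative assumptions, and a real attack would use automatic differentiation through a deep network.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def pgd_linf_segmentation(x, y, W, eps=0.3, step=0.05, iters=20):
    """l_inf PGD on a toy per-pixel linear segmentation model (illustrative).

    x: (H, W_img, C) input image with values in [0, 1]
    y: (H, W_img) integer ground-truth label per pixel
    W: (C, K) weights mapping input channels to K class logits per pixel
    Maximizes per-pixel cross-entropy inside the eps-ball around x.
    """
    x_adv = x.copy()
    num_classes = W.shape[1]
    for _ in range(iters):
        logits = x_adv @ W                      # (H, W_img, K)
        p = softmax(logits)
        onehot = np.eye(num_classes)[y]         # (H, W_img, K)
        # Analytic gradient of the summed cross-entropy w.r.t. the input:
        # d/dx CE(softmax(xW), y) = (p - onehot) W^T
        grad = (p - onehot) @ W.T               # (H, W_img, C)
        x_adv = x_adv + step * np.sign(grad)    # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid image
    return x_adv
```

With a deep segmentation network the same loop applies unchanged, except the gradient comes from backpropagation and, as the paper argues, the choice of loss (accuracy- vs. mIoU-oriented) strongly affects how well the attack optimizes.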

Citations (7)
