On Evaluating the Adversarial Robustness of Semantic Segmentation Models (2306.14217v1)

Published 25 Jun 2023 in cs.CV and cs.LG

Abstract: Achieving robustness against adversarial input perturbations is an important and intriguing problem in machine learning. In the area of semantic image segmentation, a number of adversarial training approaches have been proposed as defenses against adversarial perturbation, but the methodology for evaluating the robustness of these models is still underdeveloped compared to image classification. Here, we demonstrate that, just as in image classification, it is important to evaluate models against several different and strong attacks. We propose a set of gradient-based iterative attacks and show that it is essential to perform a large number of iterations. We also include attacks against the internal representations of the models. We apply two types of attacks: maximizing the error under a bounded perturbation, and minimizing the perturbation needed to reach a given level of error. Using this set of attacks, we show for the first time that a number of models claimed to be robust in previous work are in fact not robust at all. We then evaluate simple adversarial training algorithms that produce reasonably robust models even under our set of strong attacks. Our results indicate that a key design decision for achieving any robustness is to use only adversarial examples during training. However, this introduces a trade-off between robustness and accuracy.
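The first attack type the abstract describes, maximizing the segmentation error under a bounded perturbation, is typically realized as a PGD-style sign-gradient ascent on the per-pixel cross-entropy, projected back onto an L-infinity ball after each step. The sketch below is an illustrative toy version of that idea, not the paper's implementation: the "segmentation model" is a hypothetical per-pixel linear classifier so the gradient can be written in closed form, and the names (`pgd_linf`, `theta`) are assumptions for the example.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last (class) axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def pgd_linf(x, y, theta, eps=0.03, alpha=0.01, iters=40):
    """L-infinity PGD against a toy per-pixel linear classifier.

    x:     (H, W, C) image with values in [0, 1]
    y:     (H, W) integer ground-truth labels
    theta: (C, K) per-pixel classifier weights (logits = x @ theta)

    Maximizes the mean per-pixel cross-entropy while keeping the
    perturbation within an eps ball around x (the bounded-perturbation
    attack type described in the abstract).
    """
    x_adv = x.copy()
    onehot = np.eye(theta.shape[1])[y]              # (H, W, K)
    for _ in range(iters):
        p = softmax(x_adv @ theta)                  # per-pixel class probs
        grad = (p - onehot) @ theta.T               # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)       # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project onto eps ball
        x_adv = np.clip(x_adv, 0.0, 1.0)            # stay a valid image
    return x_adv
```

The second attack type (minimizing the perturbation for a given error level) would instead shrink `eps` while the target error is still met; the paper's point that many iterations are essential applies to both variants.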

Authors (2)
  1. Levente Halmosi
  2. Mark Jelasity