
The race to robustness: exploiting fragile models for urban camouflage and the imperative for machine learning security (2306.14609v1)

Published 26 Jun 2023 in cs.LG, cs.AI, and cs.CV

Abstract: Adversarial Machine Learning (AML) represents the ability to disrupt Machine Learning (ML) algorithms through a range of methods that broadly exploit the architecture of deep learning optimisation. This paper presents Distributed Adversarial Regions (DAR), a novel method that implements distributed instantiations of computer vision-based AML attack methods that may be used to disguise objects from image recognition in both white and black box settings. We consider the context of object detection models used in urban environments, and benchmark the MobileNetV2, NasNetMobile and DenseNet169 models against a subset of relevant images from the ImageNet dataset. We evaluate optimal parameters (size, number and perturbation method), and compare to state-of-the-art AML techniques that perturb the entire image. We find that DARs can cause a reduction in confidence of 40.4% on average, but with the benefit of not requiring the entire image, or the focal object, to be perturbed. The DAR method is a deliberately simple approach where the intention is to highlight how an adversary with very little skill could attack models that may already be productionised, and to emphasise the fragility of foundational object detection models. We present this as a contribution to the field of ML security as well as AML. This paper contributes a novel adversarial method, an original comparison between DARs and other AML methods, and frames it in a new context - that of urban camouflage and the necessity for ML security and model robustness.
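The abstract describes DAR as perturbing several small, distributed regions of an image, rather than the whole image or the focal object, and measuring the resulting drop in classifier confidence on ImageNet-pretrained models such as MobileNetV2. The sketch below is a minimal illustration of that idea only: the region count, region size, random-noise perturbation, and confidence-drop measurement are assumptions for demonstration, not the authors' DAR implementation or evaluation protocol.

```python
# Illustrative sketch of a distributed-region perturbation in the spirit of DAR.
# Assumptions (not the paper's code): 5 regions, 30x30 pixels, uniform random
# noise, and MobileNetV2 as the target classifier.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")  # ImageNet-pretrained classifier

def apply_distributed_regions(image, num_regions=5, region_size=30, eps=0.3, rng=None):
    """Overlay `num_regions` small random-noise patches at random locations.

    `image` is an HxWx3 float array in [0, 255]. The fixed parameters here are
    placeholders; the paper evaluates size, number and perturbation method.
    """
    rng = rng or np.random.default_rng(0)
    perturbed = image.copy()
    h, w, _ = perturbed.shape
    for _ in range(num_regions):
        y = rng.integers(0, h - region_size)
        x = rng.integers(0, w - region_size)
        noise = rng.uniform(-eps * 255, eps * 255, size=(region_size, region_size, 3))
        patch = perturbed[y:y + region_size, x:x + region_size] + noise
        perturbed[y:y + region_size, x:x + region_size] = np.clip(patch, 0, 255)
    return perturbed

def top1_confidence(image):
    """Return (label, confidence) of the model's top-1 prediction."""
    batch = preprocess_input(image[np.newaxis].astype("float32"))
    preds = model.predict(batch, verbose=0)
    _, label, conf = decode_predictions(preds, top=1)[0][0]
    return label, conf

# Usage: compare top-1 confidence on a clean vs. perturbed 224x224 image
# ("street.jpg" is a placeholder filename).
# image = tf.keras.utils.img_to_array(
#     tf.keras.utils.load_img("street.jpg", target_size=(224, 224)))
# print(top1_confidence(image))
# print(top1_confidence(apply_distributed_regions(image)))
```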

