
Dynamic Adversarial Attacks on Autonomous Driving Systems (2312.06701v3)

Published 10 Dec 2023 in cs.RO, cs.CV, and cs.LG

Abstract: This paper introduces an attacking mechanism to challenge the resilience of autonomous driving systems. Specifically, we manipulate the decision-making processes of an autonomous vehicle by dynamically displaying adversarial patches on a screen mounted on another moving vehicle. These patches are optimized to deceive the object detection models into misclassifying targeted objects, e.g., traffic signs. Such manipulation has significant implications for critical multi-vehicle interactions such as intersection crossing and lane changing, which are vital for safe and efficient autonomous driving systems. Particularly, we make four major contributions. First, we introduce a novel adversarial attack approach where the patch is not co-located with its target, enabling more versatile and stealthy attacks. Moreover, our method utilizes dynamic patches displayed on a screen, allowing for adaptive changes and movement, enhancing the flexibility and performance of the attack. To do so, we design a Screen Image Transformation Network (SIT-Net), which simulates environmental effects on the displayed images, narrowing the gap between simulated and real-world scenarios. Further, we integrate a positional loss term into the adversarial training process to increase the success rate of the dynamic attack. Finally, we shift the focus from merely attacking perceptual systems to influencing the decision-making algorithms of self-driving systems. Our experiments demonstrate the first successful implementation of such dynamic adversarial attacks in real-world autonomous driving scenarios, paving the way for advancements in the field of robust and secure autonomous driving.

Introduction

In the field of autonomous driving, the reliability of decision-making systems is paramount for safety. One key aspect of these systems is their ability to accurately recognize and respond to objects such as traffic signs. However, the resilience of these systems to adversarial attacks, where the input data is manipulated to deceive the system, remains a concern. This paper presents a novel approach to challenge the robustness of autonomous driving systems by introducing dynamic adversarial patches aimed at the object detection models that these systems rely on.

Attack Framework

The paper introduces a method for generating dynamic adversarial patches that mislead the object detection algorithms of autonomous vehicles. The patches are displayed on a screen mounted on another vehicle and adapt to environmental changes and to the relative positions of the vehicles involved. A Screen Image Transformation Network (SIT-Net) is designed to account for environmental effects on the displayed images, helping to bridge the gap between simulated and real-world scenarios. In addition, a positional loss term is included in the adversarial training process to improve the attack's success rate. These manipulations can affect critical driving decisions, such as intersection crossing and lane changing.
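
To make the idea concrete, below is a minimal sketch of what a SIT-Net-style module could look like in PyTorch. The class name, architecture, and layer sizes are assumptions made for illustration, not the paper's actual design; such a network would typically be trained on pairs of displayed and camera-captured images with a pixel-wise reconstruction loss, then frozen and used inside the patch optimization loop.

```python
import torch
import torch.nn as nn

class SITNetSketch(nn.Module):
    """Illustrative stand-in for the Screen Image Transformation Network:
    a small image-to-image CNN that learns how a patch shown on the
    attacker's screen appears to the victim's camera (color shift, blur,
    exposure). Layer sizes here are assumptions, not the paper's values."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, displayed_patch):
        # Input: RGB patch sent to the screen, shape (N, 3, H, W) in [0, 1].
        # Output: predicted appearance of that patch in the camera image,
        # used in place of the raw patch during adversarial optimization.
        return self.net(displayed_patch)
```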

Methodology

The paper employs gradient-descent optimization to find patches that cause objects to be misclassified, for example a non-restrictive traffic sign identified as a restrictive one. The SIT-Net adapts the patch to environmental conditions, and the patches themselves are optimized for different distances from the observer vehicle. By incorporating the positions of the interacting cars and environmental factors, the attack strategy becomes more dynamic and better reflects real-world driving scenarios.
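
As a rough illustration of this optimization step, the snippet below sketches a targeted patch attack against a generic image classifier in PyTorch. The function name, the mask-based compositing, and the hyperparameters are assumptions chosen for clarity; the authors' actual pipeline targets an object detector, routes the patch through SIT-Net, and adds the positional loss term described above.

```python
import torch
import torch.nn.functional as F

def optimize_patch(model, scene, mask, target_class, steps=200, lr=0.01):
    """Minimal sketch of gradient-descent patch optimization for a targeted
    misclassification (e.g., non-restrictive sign -> restrictive class).
    `model`, `mask`, and the hyperparameters are illustrative assumptions."""
    patch = torch.rand_like(scene, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([patch], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        # Composite the patch into the scene only inside the screen region.
        attacked = scene * (1 - mask) + patch.clamp(0, 1) * mask
        logits = model(attacked.unsqueeze(0))   # (1, num_classes)
        loss = F.cross_entropy(logits, target)  # push toward the target class
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return patch.detach().clamp(0, 1)
```

In practice the loss would also be averaged over transformations simulating different viewing distances and vehicle positions, which is what makes the patch effective as the two vehicles move relative to each other.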

Experimental Insights

The paper reports the first successful implementation of this kind of dynamic adversarial attack against real-world autonomous driving systems. Experiments show that the dynamic patches achieve higher attack success rates in many cases, especially when the attacking vehicle is close to the targeted vehicle, a setting particularly relevant in practical driving situations. The enhancements to the adversarial training framework are expected to inform future work on more resilient autonomous driving systems.

This research highlights the risks and vulnerabilities of deep neural network-based object detectors in autonomous vehicles, underscoring the need for robust autonomous driving systems that can withstand adversarial conditions and operate safely.

Authors (5)
  1. Amirhosein Chahe (3 papers)
  2. Chenan Wang (8 papers)
  3. Abhishek Jeyapratap (1 paper)
  4. Kaidi Xu (85 papers)
  5. Lifeng Zhou (52 papers)