Dynamic Adversarial Attacks on Autonomous Driving Systems (2312.06701v3)
Abstract: This paper introduces an attack mechanism that challenges the resilience of autonomous driving systems. Specifically, we manipulate the decision-making process of an autonomous vehicle by dynamically displaying adversarial patches on a screen mounted on another moving vehicle. These patches are optimized to deceive object detection models into misclassifying targeted objects, e.g., traffic signs. Such manipulation has significant implications for safety-critical multi-vehicle interactions such as intersection crossing and lane changing, which are vital for safe and efficient autonomous driving. In particular, we make four major contributions. First, we introduce a novel adversarial attack approach in which the patch is not co-located with its target, enabling more versatile and stealthy attacks. Second, our method uses dynamic patches displayed on a screen, allowing adaptive changes and movement that enhance the flexibility and performance of the attack; to this end, we design a Screen Image Transformation Network (SIT-Net), which simulates environmental effects on the displayed images, narrowing the gap between simulated and real-world scenarios. Third, we integrate a positional loss term into the adversarial training process to increase the success rate of the dynamic attack. Fourth, we shift the focus from merely attacking perceptual systems to influencing the decision-making algorithms of self-driving vehicles. Our experiments demonstrate the first successful implementation of such dynamic adversarial attacks in real-world autonomous driving scenarios, paving the way for advances in robust and secure autonomous driving.
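The combined objective described above, a misclassification term plus a positional loss on the predicted bounding box, can be sketched with a toy example. This is a minimal, fully hypothetical illustration: the linear "detector", the weights `Wc`/`Wb`, and the loss weighting are assumptions for exposition, not the paper's actual SIT-Net pipeline or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # toy patch dimensionality

# Hypothetical linear "detector": one head for class scores, one for box center.
Wc = rng.normal(size=(2, D))        # scores for [target_class, true_class]
Wb = rng.normal(size=(2, D)) * 0.1  # predicted bounding-box center (x, y)

def toy_detector(patch):
    return Wc @ patch, Wb @ patch

TARGET_CENTER = np.array([0.5, 0.5])  # where the attack wants the box localized

def attack_loss(patch, lam=0.1):
    scores, center = toy_detector(patch)
    cls_term = scores[1] - scores[0]              # push target class above true class
    pos_term = np.sum((center - TARGET_CENTER) ** 2)  # positional loss term
    return cls_term + lam * pos_term

def numerical_grad(f, x, eps=1e-4):
    # Finite-difference gradient, standing in for autograd in a real pipeline.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

patch = rng.normal(size=D)
losses = [attack_loss(patch)]
for _ in range(50):
    patch -= 0.05 * numerical_grad(attack_loss, patch)
    patch = np.clip(patch, -1.0, 1.0)  # keep the patch within displayable range
    losses.append(attack_loss(patch))
```

In a real attack, the detector would be a trained model (e.g., YOLO), gradients would come from automatic differentiation, and the patch would pass through a screen-transformation model before the loss is computed; the structure of the combined objective, however, is the same.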
- Amirhosein Chahe
- Chenan Wang
- Abhishek Jeyapratap
- Kaidi Xu
- Lifeng Zhou