TPatch: A Triggered Physical Adversarial Patch (2401.00148v1)
Abstract: Autonomous vehicles increasingly rely on vision-based perception modules to acquire information about the driving environment and detect obstacles. Correct detection and classification are essential for safe driving decisions. Existing works have demonstrated the feasibility of fooling perception models, such as object detectors and image classifiers, with printed adversarial patches. However, most of them are indiscriminately offensive to every passing autonomous vehicle. In this paper, we propose TPatch, a physical adversarial patch triggered by acoustic signals. Unlike other adversarial patches, TPatch remains benign under normal circumstances but can be triggered to launch a hiding, creating, or altering attack via a designed distortion introduced by signal injection attacks against cameras. To avoid the suspicion of human drivers and make the attack practical and robust in the real world, we propose a content-based camouflage method and an attack robustness enhancement method to strengthen it. Evaluations with three object detectors, YOLO V3/V5 and Faster R-CNN, and eight image classifiers demonstrate the effectiveness of TPatch in both simulation and the real world. We also discuss possible defenses at the sensor, algorithm, and system levels.
- Proposal for a standard default color space for the Internet: sRGB. In Color and Imaging Conference, number 1, pages 238–245. Society for Imaging Science and Technology, 1996.
- ApolloAuto. ApolloAuto/apollo: An open autonomous driving platform.
- Synthesizing robust adversarial examples. In Proceedings of ICML 2018, pages 284–293. PMLR, 2018.
- Approximating CNNs with bag-of-local-features models works surprisingly well on ImageNet. In Proceedings of ICLR 2018, 2018.
- Adversarial patch. arXiv preprint arXiv:1712.09665, 2017.
- Invisible for both camera and lidar: Security of multi-sensor fusion based perception in autonomous driving under physical-world attacks. In Proceedings of IEEE SP 2021, pages 176–194. IEEE, 2021.
- Towards evaluating the robustness of neural networks. In Proceedings of IEEE SP 2017, pages 39–57. IEEE, 2017.
- ShapeShifter: Robust physical adversarial attack on Faster R-CNN object detector. In Proceedings of ECML PKDD 2018, pages 52–68. Springer, 2018.
- SentiNet: Detecting localized universal attacks against deep learning systems. In Proceedings of IEEE SPW 2020, pages 48–54. IEEE, 2020.
- Common Objects in Context Dataset, 2018. https://cocodataset.org/.
- ImageNet: A large-scale hierarchical image database. In Proceedings of CVPR 2009, pages 248–255. IEEE, 2009.
- Boosting adversarial attacks with momentum. In Proceedings of CVPR 2018, pages 9185–9193, 2018.
- A study of the effect of JPG compression on adversarial images. arXiv preprint arXiv:1608.00853, 2016.
- Removing camera shake from a single photograph. In ACM SIGGRAPH 2006 Papers, pages 787–794. 2006.
- Image style transfer using convolutional neural networks. In Proceedings of CVPR 2016, pages 2414–2423, 2016.
- Are we ready for autonomous driving? the kitti vision benchmark suite. In Proceedings of CVPR 2012, pages 3354–3361. IEEE, 2012.
- Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
- Deep residual learning for image recognition. In Proceedings of CVPR 2016, pages 770–778, 2016.
- Detection of traffic signs in real-world images: The German Traffic Sign Detection Benchmark. In Proceedings of IJCNN 2013, pages 1–8. IEEE, 2013.
- Poltergeist: Acoustic adversarial machine learning against cameras and computer vision. In Proceedings of IEEE SP 2021, 2021.
- Perceptual losses for real-time style transfer and super-resolution. In Proceedings of ECCV 2016, pages 694–711. Springer, 2016.
- Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.
- SLAP: Improving physical adversarial examples with short-lived adversarial perturbations. In Proceedings of USENIX Security 2021, 2021.
- Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
- GhostImage: Remote perception attacks against camera-based image classification systems. In Proceedings of RAID 2020, pages 317–332, 2020.
- MarketsandMarkets. Automotive camera market by application, view type, technology, level of autonomy, vehicle & class, electric vehicle and region - global forecast to 2025.
- MagNet: A two-pronged defense against adversarial examples. In Proceedings of ACM CCS 2017, pages 135–147, 2017.
- YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018.
- Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell., 39(6):1137–1149, 2016.
- Dirty road can attack: Security of deep learning based automated lane centering under physical-world attack. In Proceedings of USENIX Security 2021, pages 3309–3326, 2021.
- Invisible perturbations: Physical adversarial examples exploiting the rolling shutter effect. In Proceedings of CVPR 2021, 2021.
- Geary K. Schwemmer. Doppler shift compensation system for laser transmitters and receivers, February 2, 1993. US Patent 5,184,241.
- Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of CVPR 2017, pages 618–626, 2017.
- SoK: On the semantic AI security in autonomous driving. arXiv preprint arXiv:2203.05314, 2022.
- Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
- Physical adversarial examples for object detectors. In Proceedings of WOOT 2018, 2018.
- Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of AAAI 2017, 2017.
- Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
- WALNUT: Waging doubt on the integrity of MEMS accelerometers with acoustic injection attacks. In Proceedings of IEEE EuroS&P 2017, pages 3–18. IEEE, 2017.
- Ultralytics. YOLOv5.
- PatchGuard: A provably robust defense against adversarial patches via small receptive fields and masking. In Proceedings of USENIX Security 2021, 2021.
- Adversarial examples for semantic segmentation and object detection. In Proceedings of ICCV 2017, pages 1369–1378, 2017.
- Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155, 2017.
- Can you trust autonomous vehicles: Contactless attacks against sensors of self-driving vehicle. Def Con, 24(8):109, 2016.
- Bdd100k: A diverse driving dataset for heterogeneous multitask learning. In Proceedings of CVPR 2020, pages 2636–2645, 2020.
- Defending against whitebox adversarial attacks via randomized discretization. In Proceedings of AISTATS 2019, pages 684–693. PMLR, 2019.
- Seeing isn’t believing: Towards more robust adversarial attack against real world object detectors. In Proceedings of ACM CCS 2019, pages 1989–2004, 2019.