Infrared Adversarial Car Stickers (2405.09924v1)
Abstract: Infrared physical adversarial examples are important for studying the security of infrared AI systems, which are widely deployed in daily life, e.g., in autonomous driving. Previous infrared physical attacks mainly targeted 2D infrared pedestrian detection and therefore may not fully reveal the threat such attacks pose to AI systems. In this work, we propose a physical attack against infrared detectors based on 3D modeling and apply it to a real car. The goal is to design a set of infrared adversarial stickers that make cars invisible to infrared detectors across viewing angles, distances, and scenes. We build a 3D infrared car model with realistic infrared characteristics and propose an infrared adversarial pattern generation method based on 3D mesh shadows. We further propose a 3D control-point-based mesh smoothing algorithm, together with a set of smoothness loss functions, to smooth the adversarial meshes and ease sticker fabrication. Finally, we designed aluminum stickers and conducted physical experiments on two real Mercedes-Benz A200L cars. Our adversarial stickers hid the cars from Faster R-CNN, an object detector, at various viewing angles, distances, and scenes: the attack success rate (ASR) on real cars was 91.49%, whereas random stickers and no stickers achieved ASRs of only 6.21% and 0.66%, respectively. Moreover, the designed stickers achieved ASRs of 73.35%–95.80% against six unseen object detectors such as YOLOv3 and Deformable DETR, showing good transferability of the attack across detectors.
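The abstract mentions smoothness loss functions that keep the adversarial mesh smooth enough for physical sticker fabrication. The paper's exact losses are not given here; as a minimal sketch, a standard uniform Laplacian smoothness term (penalizing each vertex's squared distance to the centroid of its one-ring neighbors) illustrates the idea. The function name and mesh representation below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def laplacian_smoothness_loss(vertices, faces):
    """Uniform Laplacian smoothness term (illustrative, not the paper's exact loss).

    vertices: (N, 3) float array of vertex positions.
    faces: iterable of (i, j, k) vertex-index triangles.
    Returns the mean squared distance from each vertex to the centroid
    of its one-ring neighbors; lower values mean a smoother mesh.
    """
    n = len(vertices)
    neighbors = [set() for _ in range(n)]
    for a, b, c in faces:                    # build one-ring adjacency
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    loss = 0.0
    for i, nbrs in enumerate(neighbors):
        if nbrs:                             # skip isolated vertices
            centroid = vertices[list(nbrs)].mean(axis=0)
            loss += float(np.sum((vertices[i] - centroid) ** 2))
    return loss / n

# Usage: a flat quad (two triangles) scores lower than the same quad
# with one vertex lifted out of the plane.
flat = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
faces = [(0, 1, 2), (1, 3, 2)]
bumped = flat.copy()
bumped[0, 2] = 1.0
```

In an optimization loop, a term like this would be added (with a weight) to the adversarial detection loss, so gradient updates trade attack strength against mesh smoothness; differentiable renderers such as PyTorch3D provide analogous built-in losses.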
- Cognitive data augmentation for adversarial defense via pixel masking. Pattern Recognition Letters, 146:244–251, 2021.
- Synthesizing robust adversarial examples. In International Conference on Machine Learning (ICML), 2018.
- Cascade R-CNN: High quality object detection and instance segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(5):1483–1498, 2019.
- Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy, pages 39–57, 2017.
- MMDetection: Open MMLab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019.
- A point set generation network for 3D object reconstruction from a single image. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 605–613, 2017.
- FLIR. Free FLIR thermal dataset for algorithm training. https://www.flir.com/oem/adas/adas-dataset-form/ Accessed Nov. 12, 2021.
- Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), 2015.
- SegPGD: An effective and efficient adversarial attack for evaluating and boosting segmentation robustness. In European Conference on Computer Vision (ECCV), pages 308–325, 2022.
- Countering adversarial images using input transformations. In International Conference on Learning Representations (ICLR), 2018.
- Adversarial texture for fooling person detectors in the physical world. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
- Physically realizable natural-looking clothing textures evade person detectors via 3D modeling. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16975–16984, 2023.
- Universal physical camouflage attacks on object detectors. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
- Focal loss for dense object detection. In IEEE International Conference on Computer Vision (ICCV), 2017.
- SSD: Single shot multibox detector. In European Conference on Computer Vision (ECCV), 2016.
- Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR), 2018.
- TorchVision maintainers and contributors. TorchVision: PyTorch's computer vision library. https://github.com/pytorch/vision, 2016.
- Laplacian mesh optimization. In Proceedings of the 4th International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia, pages 381–389, 2006.
- Libra R-CNN: Towards balanced learning for object detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
- Accelerating 3D deep learning with PyTorch3D. arXiv preprint arXiv:2007.08501, 2020.
- YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018.
- Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6), 2016.
- Grad-CAM: Visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision (ICCV), pages 618–626, 2017.
- One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 23(5):828–841, 2019.
- DTA: Physical camouflage attacks using differentiable transformation network. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15305–15314, 2022.
- Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2014.
- Fooling automated surveillance cameras: Adversarial patches to attack person detection. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019.
- FCA: Learning a 3D full-coverage vehicle camouflage for multi-view physical adversarial attack. In AAAI Conference on Artificial Intelligence, pages 2414–2422, 2022.
- Dual attention suppression attack: Generate adversarial camouflage in physical world. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8565–8574, 2021.
- HotCold Block: Fooling thermal infrared detectors with a novel wearable design. In AAAI Conference on Artificial Intelligence, pages 15233–15241, 2023.
- Unified adversarial patch for cross-modal attacks in the physical world. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 4445–4454, 2023.
- Physically adversarial infrared patches with learnable shapes and locations. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12334–12342, 2023.
- Feature squeezing: Detecting adversarial examples in deep neural networks. In Network and Distributed System Security Symposium (NDSS), 2018.
- CAMOU: Learning a vehicle camouflage for physical adversarial attack on object detections in the wild. In International Conference on Learning Representations (ICLR), 2019.
- Fooling thermal infrared pedestrian detectors in real world using small bulbs. In AAAI Conference on Artificial Intelligence, pages 3616–3624, 2021.
- Deformable DETR: Deformable transformers for end-to-end object detection. In International Conference on Learning Representations (ICLR), 2021.
- Infrared invisible clothing: Hiding from infrared detectors at multiple angles in real world. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
- Xiaopei Zhu
- Yuqiu Liu
- Zhanhao Hu
- Jianmin Li
- Xiaolin Hu