Invisible Reflections: Leveraging Infrared Laser Reflections to Target Traffic Sign Perception (2401.03582v1)
Abstract: All vehicles must follow the rules that govern traffic behavior, regardless of whether the vehicles are human-driven or Connected Autonomous Vehicles (CAVs). Road signs indicate locally active rules, such as speed limits and requirements to yield or stop. Recent research has demonstrated attacks, such as adding stickers or projected colored patches to signs, that cause CAV misinterpretation, resulting in potential safety issues. Humans can see and potentially defend against these attacks, but humans cannot defend against what they cannot observe. We have developed an effective physical-world attack that leverages the sensitivity of filterless image sensors and the properties of Infrared Laser Reflections (ILRs), which are invisible to humans. The attack is designed to affect CAV cameras and perception, undermining traffic sign recognition by inducing misclassification. In this work, we formulate the threat model and requirements for an ILR-based traffic sign perception attack to succeed. We evaluate the effectiveness of the ILR attack with real-world experiments against two major traffic sign recognition architectures on four IR-sensitive cameras. Our black-box optimization methodology allows the attack to achieve up to a 100% attack success rate in indoor, static scenarios and a >80.5% attack success rate in our outdoor, moving-vehicle scenarios. We find that the latest state-of-the-art certifiable defense is ineffective against ILR attacks, as it mis-certifies >33.5% of cases. To address this, we propose a detection strategy based on the physical properties of IR laser reflections that detects 96% of ILR attacks.
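The abstract describes the attack as a black-box optimization over laser parameters, but does not spell out the loop here. The sketch below illustrates the general shape of such a search, using plain random search against a toy stand-in scoring function. Everything in it is a hypothetical illustration, not the authors' method: `true_class_confidence` is a smooth toy function standing in for real camera-plus-classifier queries, and the parameter ranges are invented.

```python
import random

# Hypothetical stand-in for querying the victim pipeline: the classifier's
# confidence in the sign's true class after an IR reflection with the given
# parameters is projected onto it. A real attack would capture camera frames
# of the physical sign; this toy function just makes the sketch runnable.
# Confidence drops as the simulated spot grows brighter, larger, and closer
# to the sign's center at (0.5, 0.5).
def true_class_confidence(x, y, radius, intensity):
    dist = ((x - 0.5) ** 2 + (y - 0.5) ** 2) ** 0.5
    return max(0.0, 1.0 - intensity * radius * (1.0 - dist))

def black_box_attack(budget=500, seed=0):
    """Random search over laser parameters: sample candidate settings and
    keep whichever most suppresses the true-class confidence. Only the
    scalar score is used, so the victim model stays a black box."""
    rng = random.Random(seed)
    best_params, best_conf = None, float("inf")
    for _ in range(budget):
        params = (rng.random(), rng.random(),    # aim point (x, y) on the sign
                  rng.uniform(0.05, 0.5),        # reflection spot radius
                  rng.uniform(0.0, 2.0))         # laser intensity (arbitrary units)
        conf = true_class_confidence(*params)
        if conf < best_conf:
            best_params, best_conf = params, conf
    return best_params, best_conf
```

A practical version would replace random search with a guided sampler (the paper's bibliography points at TPE/Optuna-style optimizers) and replace the toy score with live queries of the target recognition model, but the query-only structure of the loop is the same.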