
DOEPatch: Dynamically Optimized Ensemble Model for Adversarial Patches Generation (2312.16907v2)

Published 28 Dec 2023 in cs.CV

Abstract: Object detection is a fundamental task in applications ranging from autonomous driving to intelligent security systems. However, recognition of a person can be hindered when their clothing is decorated with carefully designed graffiti patterns, causing object detection to fail. To achieve greater attack potential against unknown black-box models, adversarial patches must be capable of affecting the outputs of multiple object detection models. While ensemble models have proven effective, current research in object detection typically focuses on simply fusing the outputs of all models, with limited attention given to developing general adversarial patches that function effectively in the physical world. In this paper, we introduce the concept of energy and treat adversarial patch generation as an optimization that minimizes the total energy of the "person" category. Additionally, by adopting adversarial training, we construct a dynamically optimized ensemble model: during training, the weight parameters of the attacked target models are adjusted to find the balance point at which the generated adversarial patches can effectively attack all target models. We carried out six sets of comparative experiments and tested our algorithm on five mainstream object detection models. The adversarial patches generated by our algorithm reduce the recognition accuracy of YOLOv2 and YOLOv3 to 13.19% and 29.20%, respectively. In addition, we tested T-shirts covered with our adversarial patches in the physical world and showed that wearers can evade recognition by the object detection model. Finally, using the Grad-CAM tool, we explored the attack mechanism of adversarial patches from an energy perspective.
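The abstract describes a min-max scheme: the patch is optimized to lower an ensemble "energy" (the detectors' confidence in the "person" category), while the ensemble weights are adversarially re-balanced toward whichever model the patch currently attacks least well. Below is a minimal, illustrative PyTorch sketch of that idea, not the authors' released code: ToyDetector stands in for real detectors such as YOLOv2/YOLOv3, and all names (apply_patch, ensemble_energies), learning rates, and sizes are assumptions made for illustration.

```python
import torch

class ToyDetector(torch.nn.Module):
    """Stand-in for a real detector; outputs a scalar 'person' confidence per image."""
    def __init__(self):
        super().__init__()
        self.backbone = torch.nn.Conv2d(3, 8, kernel_size=3, stride=4)
        self.head = torch.nn.Linear(8, 1)

    def forward(self, x):
        feats = self.backbone(x).mean(dim=(2, 3))  # global average pooling
        return torch.sigmoid(self.head(feats)).squeeze(-1)

def apply_patch(images, patch):
    """Paste the patch onto a fixed region of each image (a real attack would
    also apply random geometric/photometric transforms)."""
    out = images.clone()
    ph, pw = patch.shape[-2:]
    out[:, :, 32:32 + ph, 32:32 + pw] = patch.clamp(0.0, 1.0)
    return out

detectors = [ToyDetector().eval() for _ in range(3)]  # frozen target models
for d in detectors:
    for p in d.parameters():
        p.requires_grad_(False)

patch = torch.rand(3, 64, 64, requires_grad=True)    # the adversarial patch
w = torch.zeros(len(detectors), requires_grad=True)  # ensemble weight logits
opt_patch = torch.optim.Adam([patch], lr=0.03)
opt_w = torch.optim.Adam([w], lr=0.1)

images = torch.rand(4, 3, 256, 256)  # placeholder batch of "person" images

def ensemble_energies(p):
    """Per-model energy: mean 'person' confidence on the patched batch."""
    patched = apply_patch(images, p)
    return torch.stack([d(patched).mean() for d in detectors])

for step in range(200):
    # Min step: update the patch to lower the weighted ensemble energy.
    energies = ensemble_energies(patch)
    loss_patch = (torch.softmax(w, dim=0).detach() * energies).sum()
    opt_patch.zero_grad()
    loss_patch.backward()
    opt_patch.step()

    # Max step: shift the weights toward the model with the highest remaining
    # energy, so later patch updates concentrate on the hardest target.
    energies = ensemble_energies(patch.detach())
    loss_w = -(torch.softmax(w, dim=0) * energies).sum()
    opt_w.zero_grad()
    loss_w.backward()
    opt_w.step()
```

For a physically realizable patch, this loop would additionally need expectation-over-transformation augmentation and printability/smoothness losses before the pattern is transferred to clothing.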
