Flexible Physical Camouflage Generation Based on a Differential Approach (2402.13575v3)

Published 21 Feb 2024 in cs.CV and cs.AI

Abstract: This study introduces a novel approach to neural rendering, specifically tailored for adversarial camouflage, within an extensive 3D rendering framework. Our method, named FPA, goes beyond traditional techniques by faithfully simulating lighting conditions and material variations, ensuring a nuanced and realistic representation of textures on a 3D target. To achieve this, we employ a generative approach that learns adversarial patterns from a diffusion model. This involves incorporating a specially designed adversarial loss and covert constraint loss to guarantee the adversarial and covert nature of the camouflage in the physical world. Furthermore, we showcase the effectiveness of the proposed camouflage in sticker mode, demonstrating its ability to cover the target without compromising adversarial information. Through empirical and physical experiments, FPA exhibits strong performance in terms of attack success rate and transferability. Additionally, the designed sticker-mode camouflage, coupled with a concealment constraint, adapts to the environment, yielding diverse styles of texture. Our findings highlight the versatility and efficacy of the FPA approach in adversarial camouflage applications.
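To make the abstract's two-term objective concrete, here is a minimal sketch of optimizing a texture with an adversarial loss plus a covert-constraint loss through a differentiable renderer. This is an illustration under stated assumptions, not the paper's implementation: DummyRenderer, DummyDetector, the MSE covert term, and the weight lambda_covert are all hypothetical stand-ins for the components the abstract names.

```python
import torch
import torch.nn.functional as F

class DummyRenderer(torch.nn.Module):
    """Stand-in for a differentiable renderer that maps the learnable
    texture onto the 3D target (the paper additionally models lighting
    and material variation, omitted here)."""
    def forward(self, texture):
        return texture.unsqueeze(0)  # pretend the texture is the rendered image

class DummyDetector(torch.nn.Module):
    """Stand-in for an object detector; returns a per-image confidence
    score for the target."""
    def forward(self, images):
        return images.mean(dim=(1, 2, 3))  # placeholder "detection score"

texture = torch.rand(3, 256, 256, requires_grad=True)  # learnable camouflage
reference = torch.rand(1, 3, 256, 256)  # environment appearance to blend with
renderer, detector = DummyRenderer(), DummyDetector()
optimizer = torch.optim.Adam([texture], lr=0.01)
lambda_covert = 0.1  # assumed trade-off weight between the two terms

for step in range(100):
    optimizer.zero_grad()
    rendered = renderer(texture)
    # Adversarial term: drive the detector's confidence on the target down.
    adv_loss = detector(rendered).mean()
    # Covert term: keep the rendered appearance close to the environment
    # reference so the pattern stays inconspicuous.
    covert_loss = F.mse_loss(rendered, reference)
    loss = adv_loss + lambda_covert * covert_loss
    loss.backward()
    optimizer.step()
```

The design choice the abstract highlights, balancing evasion strength against visual concealment, shows up here as the single weight on the covert term; the paper's actual losses and its diffusion-model-based pattern generation are richer than this two-term sketch.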
