EVD4UAV: An Altitude-Sensitive Benchmark to Evade Vehicle Detection in UAV (2403.05422v2)

Published 8 Mar 2024 in cs.CV

Abstract: Vehicle detection in images captured by Unmanned Aerial Vehicles (UAVs) has wide applications in aerial photography and remote sensing, and many public benchmark datasets have been proposed for vehicle detection and tracking in UAV imagery. Recent studies show that placing an adversarial patch on an object can fool well-trained deep-neural-network-based object detectors, posing security concerns for downstream tasks. However, current public UAV datasets tend to lack diverse altitudes, vehicle attributes, and fine-grained instance-level annotations; they are mostly captured in side view with blurred vehicle roofs, so none of them is well suited for studying adversarial patch-based attacks on vehicle detection. In this paper, we propose a new dataset, EVD4UAV, as an altitude-sensitive benchmark for evading vehicle detection in UAV imagery, with 6,284 images and 90,886 fine-grained annotated vehicles. EVD4UAV covers diverse altitudes (50 m, 70 m, 90 m), vehicle attributes (color, type), and fine-grained annotations (horizontal and rotated bounding boxes, instance-level masks), all in top view with clear vehicle roofs. One white-box and two black-box patch-based attack methods are implemented against three classic deep-neural-network-based object detectors on EVD4UAV. The experimental results show that these representative attack methods cannot achieve robust, altitude-insensitive attack performance.
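As a loose illustration of the white-box setting the abstract mentions, the sketch below optimizes patch pixels by gradient descent to suppress a detector's objectness score. Everything here is a hypothetical stand-in, not the paper's method: the "detector" is a fixed toy linear model, and a real attack would instead backpropagate through a trained network such as YOLO or Faster R-CNN.

```python
import numpy as np

# Toy white-box adversarial patch attack (illustrative assumption, not the
# paper's implementation). The detector's objectness score is approximated
# by a fixed linear model; patch pixels are updated to minimize that score.

rng = np.random.default_rng(0)

H = W = 16                        # toy top-view crop of a vehicle roof
w = rng.normal(size=(H, W))       # toy detector weights: score = sum(w * image)

def objectness(img):
    """Toy differentiable objectness score (stand-in for a real detector)."""
    return float(np.sum(w * img))

img = rng.uniform(0.0, 1.0, size=(H, W))
py, px, P = 4, 4, 8               # patch location/size on the object
patch = np.full((P, P), 0.5)      # start from a gray patch

def apply_patch(img, patch):
    out = img.copy()
    out[py:py + P, px:px + P] = patch
    return out

score_before = objectness(apply_patch(img, patch))

lr = 0.1
for _ in range(200):
    # For a linear score, the gradient w.r.t. the patch pixels is simply the
    # weights under the patch region; a real attack would obtain this via
    # autograd through the full detector.
    grad = w[py:py + P, px:px + P]
    patch = np.clip(patch - lr * grad, 0.0, 1.0)  # keep pixels in [0, 1]

score_after = objectness(apply_patch(img, patch))
print(score_before, score_after)
```

The score after optimization is strictly lower, i.e. the patch "evades" the toy detector; the paper's finding is that such attacks, when mounted against real detectors, do not transfer robustly across capture altitudes.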
