Towards Real-Time Fast Unmanned Aerial Vehicle Detection Using Dynamic Vision Sensors (2403.11875v1)
Abstract: Unmanned Aerial Vehicles (UAVs) are gaining popularity in civil and military applications. However, uncontrolled access to restricted areas threatens privacy and security. Thus, the prevention and detection of UAVs are pivotal to guaranteeing confidentiality and safety. Although active scanning, mainly based on radar, is one of the most accurate technologies, it can be expensive and less versatile than passive inspections, e.g., object recognition. Dynamic vision sensors (DVS) are bio-inspired, event-based vision sensors that capture timestamped, pixel-level brightness changes in fast-moving scenes, a property well suited to low-latency object detection. This paper presents F-UAV-D (Fast Unmanned Aerial Vehicle Detector), an embedded system that enables fast-moving drone detection. In particular, we propose a setup that exploits DVS as an alternative to RGB cameras in a real-time, low-power configuration. Our approach leverages the high dynamic range (HDR) and background suppression of DVS and, when trained with various fast-moving drones, outperforms RGB input in suboptimal ambient conditions such as low illumination and fast-moving scenes. Our results show that F-UAV-D can (i) detect drones while drawing <15 W on average and (ii) perform real-time inference (i.e., <50 ms) by leveraging the CPU and GPU nodes of our edge computer.
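The abstract outlines the pipeline at a high level: DVS events are accumulated into frame-like inputs and fed to a detector running on the edge computer's CPU and GPU within a <50 ms budget. As a rough illustration of what that event-to-frame step and latency check could look like, here is a minimal Python sketch; the (timestamp, x, y, polarity) event layout, the 640×480 resolution, the 10 ms accumulation window, and the `run_detector` callable are all assumptions for illustration, not the authors' implementation.

```python
import time
import numpy as np

def events_to_frame(events, width=640, height=480, window_us=10_000):
    """Accumulate one time window of DVS events into a 2-channel,
    frame-like tensor (one channel per polarity) that a frame-based
    detector such as YOLO could consume.

    `events` is assumed to be an (N, 4) array of rows
    (timestamp_us, x, y, polarity), sorted by timestamp -- a layout
    chosen for illustration, not the paper's actual event format.
    """
    t0 = events[0, 0]
    win = events[events[:, 0] < t0 + window_us]
    xs = win[:, 1].astype(np.int64)
    ys = win[:, 2].astype(np.int64)
    ps = win[:, 3].astype(np.int64)  # polarity: 0 (OFF) or 1 (ON)
    frame = np.zeros((2, height, width), dtype=np.float32)
    np.add.at(frame, (ps, ys, xs), 1.0)  # unbuffered per-pixel event counting
    return frame / max(frame.max(), 1.0)  # bound detector input to [0, 1]

def detect_with_budget(frame, run_detector, budget_ms=50.0):
    """Run a detector (hypothetical callable) and check the <50 ms
    real-time budget claimed in the abstract."""
    start = time.perf_counter()
    boxes = run_detector(frame)
    elapsed_ms = (time.perf_counter() - start) * 1e3
    return boxes, elapsed_ms, elapsed_ms < budget_ms
```

Given the toolchain the paper cites (Ultralytics YOLO, TensorRT's trtexec), `run_detector` would plausibly wrap a TensorRT-compiled YOLO engine on the Xavier NX-based edge box, but the sketch deliberately abstracts that detail away.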
Authors: Jakub Mandula, Jonas Kühne, Luca Pascarella, Michele Magno