
Obscured Wildfire Flame Detection By Temporal Analysis of Smoke Patterns Captured by Unmanned Aerial Systems (2307.00104v1)

Published 30 Jun 2023 in cs.CV, cs.AI, and cs.LG

Abstract: This paper addresses the real-time detection of obscured wildfires (fire flames hidden by trees, smoke, clouds, and other natural barriers) using drones equipped only with RGB cameras. We propose a novel methodology that performs semantic segmentation based on temporal analysis of smoke patterns in video sequences. Our approach uses an encoder-decoder architecture with a pre-trained CNN encoder and a 3D-convolutional decoder, stacking per-frame features sequentially to exploit temporal variations. The predicted fire locations can help drones combat forest fires effectively and pinpoint fire-retardant drops on exact flame locations. We evaluate the method on a curated dataset derived from the FLAME2 dataset, which pairs RGB video with IR video used to determine the ground truth. The proposed method has the distinctive ability to detect obscured fire, achieving a Dice score of 85.88%, a precision of 92.47%, and a classification accuracy of 90.67% on test data, with promising results on visual inspection. It also outperforms other methods by a significant margin in video-level fire classification, reaching about 100% accuracy with a MobileNet+CBAM encoder backbone.
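The core architectural idea in the abstract, stacking per-frame encoder features along a time axis and decoding them with 3D convolutions, and the reported Dice metric can be illustrated with a minimal sketch. This is not the authors' implementation: the array shapes, the hand-rolled single-output-channel 3D convolution, and the function names below are illustrative assumptions, using NumPy only so the example stays self-contained.

```python
import numpy as np

def stack_temporal_features(frame_features):
    """Stack per-frame encoder feature maps along a new time axis.

    frame_features: list of T arrays, each of shape (C, H, W).
    Returns an array of shape (C, T, H, W), the typical input layout
    for a 3D convolution over (time, height, width).
    """
    return np.stack(frame_features, axis=1)

def conv3d_valid(volume, kernel):
    """Minimal single-output-channel 3D convolution with 'valid' padding.

    volume: (C, T, H, W); kernel: (C, kT, kH, kW).
    Illustration only -- a real decoder would use a DL framework's Conv3d.
    """
    C, T, H, W = volume.shape
    kc, kt, kh, kw = kernel.shape
    assert kc == C, "kernel must cover all input channels"
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # Sum over channels and the local spatiotemporal window.
                out[t, i, j] = np.sum(
                    volume[:, t:t + kt, i:i + kh, j:j + kw] * kernel)
    return out

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (the metric the paper reports)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

For example, stacking five (4, 8, 8) feature maps yields a (4, 5, 8, 8) volume, and a (4, 3, 3, 3) kernel reduces it to a (3, 6, 6) output, collapsing the channel dimension while mixing information across neighboring frames, which is what lets the decoder exploit temporal smoke motion.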

References (23)
  1. D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “Learning spatiotemporal features with 3d convolutional networks,” in Proceedings of the IEEE international conference on computer vision, pp. 4489–4497, 2015.
  2. A. Shamsoshoara, F. Afghah, A. Razi, L. Zheng, P. Z. Fulé, and E. Blasch, “Aerial imagery pile burn detection using deep learning: The flame dataset,” Computer Networks, vol. 193, p. 108001, 2021.
  3. W. Lee, S. Kim, Y.-T. Lee, H.-W. Lee, and M. Choi, “Deep neural networks for wild fire detection with unmanned aerial vehicle,” in 2017 IEEE International Conference on Consumer Electronics (ICCE), pp. 252–253, 2017.
  4. Z. Jiao, Y. Zhang, L. Mu, J. Xin, S. Jiao, H. Liu, and D. Liu, “A yolov3-based learning strategy for real-time uav-based forest fire detection,” in 2020 Chinese Control And Decision Conference (CCDC), pp. 4963–4967, 2020.
  5. R. Xu, H. Lin, K. Lu, L. Cao, and Y. Liu, “A forest fire detection system based on ensemble learning,” Forests, vol. 12, no. 2, p. 217, 2021.
  6. Y. Zhao, J. Ma, X. Li, and J. Zhang, “Saliency detection and deep learning-based wildfire identification in uav imagery,” Sensors, vol. 18, no. 3, p. 712, 2018.
  7. L. Zhang, M. Wang, Y. Fu, and Y. Ding, “A forest fire recognition method using uav images based on transfer learning,” Forests, vol. 13, no. 7, p. 975, 2022.
  8. Y. Wang, C. Hua, W. Ding, and R. Wu, “Real-time detection of flame and smoke using an improved yolov4 network,” Signal, Image and Video Processing, vol. 16, no. 4, pp. 1109–1116, 2022.
  9. B. Hopkins, L. O’Neill, F. Afghah, A. Razi, E. Rowell, A. Watts, P. Fule, and J. Coen, “Flame 2: Fire detection and modeling: Aerial multi-spectral image dataset,” 2022.
  10. Q. Zhang, J. Xu, L. Xu, and H. Guo, “Deep convolutional neural networks for forest fire detection,” in 2016 International Forum on Management, Education and Information Technology Application, pp. 568–575, Atlantis Press, 2016.
  11. A. Dewangan, Y. Pande, H.-W. Braun, F. Vernon, I. Perez, I. Altintas, G. W. Cottrell, and M. H. Nguyen, “Figlib & smokeynet: Dataset and deep learning model for real-time wildland fire smoke detection,” Remote Sensing, vol. 14, no. 4, p. 1007, 2022.
  12. C. Yuan, Z. Liu, and Y. Zhang, “Fire detection using infrared images for uav-based forest fire surveillance,” in 2017 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 567–572, IEEE, 2017.
  13. N. Ya’acob, M. S. M. Najib, N. Tajudin, A. L. Yusof, and M. Kassim, “Image processing based forest fire detection using infrared camera,” in Journal of Physics: Conference Series, vol. 1768, p. 012014, IOP Publishing, 2021.
  14. X. Chen, B. Hopkins, H. Wang, L. O’Neill, F. Afghah, A. Razi, P. Fulé, J. Coen, E. Rowell, and A. Watts, “Wildland fire detection and monitoring using a drone-collected rgb/ir image dataset,” IEEE Access, vol. 10, pp. 121301–121317, 2022.
  15. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 2015.
  16. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” 2015.
  17. S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International conference on machine learning, pp. 448–456, PMLR, 2015.
  18. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  19. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” 2015.
  20. M. Tan and Q. Le, “Efficientnet: Rethinking model scaling for convolutional neural networks,” in International conference on machine learning, pp. 6105–6114, PMLR, 2019.
  21. A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” 2017.
  22. A. G. Roy, N. Navab, and C. Wachinger, “Recalibrating fully convolutional networks with spatial and channel “squeeze and excitation” blocks,” IEEE Transactions on Medical Imaging, vol. 38, no. 2, pp. 540–549, 2018.
  23. S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, “Cbam: Convolutional block attention module,” in Proceedings of the European conference on computer vision (ECCV), pp. 3–19, 2018.