Aerial Imagery Pile burn detection using Deep Learning: the FLAME dataset (2012.14036v1)

Published 28 Dec 2020 in cs.CV, cs.AI, cs.LG, and eess.IV

Abstract: Wildfires are one of the costliest and deadliest natural disasters in the US, causing damage to millions of hectares of forest resources and threatening the lives of people and animals. Of particular importance are risks to firefighters and operational forces, which highlights the need for leveraging technology to minimize danger to people and property. FLAME (Fire Luminosity Airborne-based Machine learning Evaluation) offers a dataset of aerial images of fires along with methods for fire detection and segmentation which can help firefighters and researchers to develop optimal fire management strategies. This paper provides a fire image dataset collected by drones during a prescribed burn of piled detritus in an Arizona pine forest. The dataset includes video recordings and thermal heatmaps captured by infrared cameras. The captured videos and images are annotated and labeled frame-wise to help researchers easily apply their fire detection and modeling algorithms. The paper also highlights solutions to two machine learning problems: (1) Binary classification of video frames based on the presence [and absence] of fire flames. An Artificial Neural Network (ANN) method is developed that achieved a 76% classification accuracy. (2) Fire detection using segmentation methods to precisely determine fire borders. A deep learning method is designed based on the U-Net up-sampling and down-sampling approach to extract a fire mask from the video frames. Our FLAME method approached a precision of 92% and a recall of 84%. Future research will expand the technique for free burning broadcast fire using thermal images.

Authors (6)
  1. Peter Z Fulé (1 paper)
  2. Alireza Shamsoshoara (13 papers)
  3. Fatemeh Afghah (90 papers)
  4. Abolfazl Razi (63 papers)
  5. Liming Zheng (9 papers)
  6. Erik Blasch (46 papers)
Citations (193)

Summary

Aerial Imagery Pile Burn Detection Using Deep Learning: The FLAME Dataset

The paper explores innovative approaches to detecting fires through aerial imagery by leveraging state-of-the-art machine learning techniques. This research introduces the FLAME (Fire Luminosity Airborne-based Machine learning Evaluation) dataset, a pioneering resource tailored for fire detection and segmentation via drone-captured images and videos. This dataset holds potential for advancing both practical fire management strategies and fire modeling methodologies.

Dataset and Methodology

The FLAME dataset serves as the cornerstone of this research, comprising video recordings and thermal heatmaps collected by drones during a prescribed burn of piled detritus in an Arizona pine forest. These recordings are annotated at the frame level, enhancing their utility for developing and testing fire detection algorithms.
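As a rough illustration of how such frame-wise labels can be consumed, the sketch below reads (frame, label) pairs from a hypothetical per-class folder layout and builds a tf.data pipeline. The directory names, image size, and file format here are assumptions for illustration only and do not reflect the exact structure of the released FLAME archives.

```python
from pathlib import Path

import tensorflow as tf

# Hypothetical layout for illustration only -- adjust paths and class folder
# names to match the actual FLAME download.
DATA_ROOT = Path("flame_dataset")
CLASS_DIRS = {"Fire": 1, "No_Fire": 0}


def load_frame_paths(root: Path) -> list[tuple[str, int]]:
    """Collect (frame_path, label) pairs from per-class folders."""
    samples = []
    for class_name, label in CLASS_DIRS.items():
        for frame in sorted((root / class_name).glob("*.jpg")):
            samples.append((str(frame), label))
    return samples


def make_dataset(samples, image_size=(224, 224), batch_size=32):
    """Build a tf.data pipeline that decodes, resizes, and batches frames."""
    paths, labels = zip(*samples)

    def _load(path, label):
        image = tf.io.read_file(path)
        image = tf.image.decode_jpeg(image, channels=3)
        image = tf.image.resize(image, image_size) / 255.0  # scale to [0, 1]
        return image, label

    ds = tf.data.Dataset.from_tensor_slices((list(paths), list(labels)))
    return ds.map(_load, num_parallel_calls=tf.data.AUTOTUNE).batch(batch_size)
```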

Two machine learning problems are addressed using this dataset:

  1. Binary Classification: The paper addresses the automated classification of video frames into fire and non-fire categories. An Artificial Neural Network (ANN) was constructed, achieving a classification accuracy of 76%. While promising, this figure reflects the challenges inherent in real-time fire detection, such as environmental variation and limited image resolution (a minimal classifier sketch follows this list).
  2. Fire Segmentation: The research develops a deep learning method based on the U-Net architecture for precise fire boundary detection within frames. The U-Net model's performance, with a precision of 92% and a recall of 84%, demonstrates its efficacy in extracting accurate fire masks. This approach contributes directly to imaging-based fire modeling and provides high-resolution fire localization (a U-Net sketch also follows the list).
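
The paper reports an ANN-based frame classifier, but its exact architecture is not reproduced here. The sketch below is a minimal Keras CNN for fire / no-fire frame classification, intended only to make the task concrete; the layer sizes, input resolution, and training settings are illustrative assumptions rather than the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_frame_classifier(input_shape=(224, 224, 3)):
    """Small CNN for fire / no-fire frame classification (illustrative only)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # probability that the frame contains fire
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model


# Example usage, where train_ds / val_ds are tf.data pipelines of (frame, label) batches:
# model = build_frame_classifier()
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```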

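The segmentation approach follows the U-Net down-sampling/up-sampling pattern described in the paper. The following is a minimal U-Net sketch with an encoder, bottleneck, and decoder with skip connections, ending in a per-pixel sigmoid output for the fire mask; the depth, filter counts, and input resolution are assumptions and not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def conv_block(x, filters):
    """Two 3x3 convolutions, as in a standard U-Net stage."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x


def build_unet(input_shape=(512, 512, 3), base_filters=16):
    """Minimal U-Net: encoder (down-sampling), bottleneck, and decoder
    (up-sampling) with skip connections, producing a per-pixel fire mask."""
    inputs = layers.Input(shape=input_shape)

    # Encoder
    c1 = conv_block(inputs, base_filters)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, base_filters * 2)
    p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, base_filters * 4)
    p3 = layers.MaxPooling2D()(c3)

    # Bottleneck
    b = conv_block(p3, base_filters * 8)

    # Decoder with skip connections to the matching encoder stage
    u3 = layers.Conv2DTranspose(base_filters * 4, 2, strides=2, padding="same")(b)
    c4 = conv_block(layers.concatenate([u3, c3]), base_filters * 4)
    u2 = layers.Conv2DTranspose(base_filters * 2, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.concatenate([u2, c2]), base_filters * 2)
    u1 = layers.Conv2DTranspose(base_filters, 2, strides=2, padding="same")(c5)
    c6 = conv_block(layers.concatenate([u1, c1]), base_filters)

    # One-channel sigmoid output: probability that each pixel belongs to fire
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c6)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
    return model
```

Precision and recall are tracked here because they are the metrics the paper reports for the fire mask; thresholding the sigmoid output at 0.5 yields the binary fire/no-fire mask per pixel.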
Implications and Future Directions

Practically, the findings of this paper are directly relevant to improving the safety and efficiency of firefighting operations. By enabling early detection and accurate mapping of fire boundaries, these technologies can support more efficient deployment of firefighting resources, potentially minimizing risks to human life and property.

Theoretically, the introduction of an annotated aerial imagery dataset specifically for fire analysis is noteworthy, as it fills a gap that previously limited research in this domain. The precision of segmentation and detection achieved exemplifies the potential for deep learning methods, particularly U-Net-based architectures, in image-based environmental monitoring.

Looking forward, the paper acknowledges the need to expand the dataset to free-burning broadcast fires. Potential research extensions include the integration of thermal and RGB imagery, evaluation of on-board processing efficiency on edge devices, and refinement of segmentation techniques to handle varied fire scenarios. Incorporating environmental context data and multi-modal imaging could further augment the situational awareness provided by this framework.

Such progressive enhancements will likely contribute to further autonomous capabilities in unmanned aerial monitoring and more sophisticated predictive firefighting strategies, marking important advancements in disaster management technology.