FireNet Dataset for Fire Detection
- FireNet Dataset is a curated collection of static images and video frames, integrating RGB and infrared data to support fire detection and perimeter segmentation research.
- It features precise, expert-reviewed annotations and diverse scenarios, ensuring robust benchmarking for classification and segmentation models.
- Benchmark analyses demonstrate high F1 scores (92-95) and real-time inference (up to 20 fps), making it ideal for IoT fire detection and disaster response.
The FireNet Dataset is a specialized and openly available resource designed to facilitate research in automated fire detection and perimeter segmentation. Introduced in the development of the FireNet model for real-time IoT fire detection applications (Jadon et al., 2019) and extended for fire perimeter segmentation from aerial infrared video (Doshi et al., 2019), the dataset aggregates diverse media types and annotation protocols to provide benchmarks for classification and segmentation tasks. Its careful compilation, realistic scenario coverage, and rigorous labeling support the development and evaluation of both lightweight and production-grade neural networks targeted at embedded platforms or rapid disaster response contexts.
1. Dataset Structure and Media Types
The FireNet Dataset comprises both static images and video frames, curated to provide comprehensive coverage for distinct fire recognition and segmentation tasks. For classification, the dataset includes:
- Training set: 1,124 fire images and 1,301 non-fire images (total: 2,425 images).
- Test set: 46 fire videos (yielding 19,094 frames), 16 non-fire videos (6,747 frames), and 160 challenging non-fire images.
For perimeter segmentation, the dataset was extended to short infrared clips (~150 frames each) captured from aerial platforms operating above wildfires. The resulting dataset includes approximately 400,000 frames, with around 100,000 showing an active fire perimeter. Each frame is supplemented with a binary mask delineating burning or burnt regions, optimized for semantic segmentation architectures.
| Subset | Modality | Volume | Purpose |
|---|---|---|---|
| Train/val | RGB images | 2,425 | Detection/classification |
| Test | Video frames, RGB images | ~26,001 frames | Robustness evaluation |
| Segmentation | IR (aerial video frames) | ~400,000 frames | Perimeter segmentation |
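The split sizes quoted above are internally consistent, as a quick arithmetic check confirms (counts taken directly from the dataset description):

```python
# Sanity check of the FireNet subset sizes quoted in the text.
train_fire, train_nonfire = 1_124, 1_301
test_fire_frames, test_nonfire_frames, test_hard_images = 19_094, 6_747, 160

train_total = train_fire + train_nonfire
test_total = test_fire_frames + test_nonfire_frames + test_hard_images

print(train_total)  # 2425 classification images
print(test_total)   # 26001 test frames/images
```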
2. Data Collection and Diversity
Dataset assembly aimed for exhaustive scenario representation with diverse backgrounds, lighting, and confounding objects. Sources include:
- Previously published datasets (Foggia's and Sharma's) for fire/no-fire imagery.
- Internet-acquired images (Google, Flickr) spanning variable contexts.
- Newly shot videos in complex real-world environments to capture naturally occurring and challenging fire events.
For segmentation, infrared data was preferred due to the unreliability of RGB imagery in smoky or obstructed scenes. A plausible implication is that IR prioritizes intensity changes and spatial patterns over color cues, strengthening the robustness of perimeter detection.
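This intensity-over-color point can be sketched in a toy example. The frame contents and the threshold value (0.8) below are hypothetical, for illustration only, and not the dataset's actual processing pipeline:

```python
import numpy as np

# Illustrative sketch: in IR, an active fire front is a high-intensity
# region, so a single-channel threshold can localize it even when smoke
# would wash out the color cues an RGB detector relies on.
ir_frame = np.random.default_rng(0).random((64, 64)) * 0.5  # cool background
ir_frame[20:30, 20:30] = 1.0                                # hot fire front

fire_mask = ir_frame > 0.8   # intensity-only cue, no color needed
print(fire_mask.sum())       # 100 pixels: exactly the hot region
```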
3. Annotation Protocols
Annotations take two forms: binary image-level labels for fire/no-fire detection and pixel-wise binary masks for segmentation. Labeling proceeded as follows:
- Detection labels were set based on the presence or absence of fire, with emphasis on challenging backgrounds (fire-like objects and distractors).
- Segmentation masks were generated under strict quality assurance supervised by fire response experts (CAL FIRE, California Air National Guard), ensuring the perimeter definitions match operational needs.
The loss function for segmentation model optimization is a differentiable Dice Similarity Coefficient:

L_Dice = 1 − (2 Σᵢ pᵢ gᵢ + ε) / (Σᵢ pᵢ + Σᵢ gᵢ + ε)

where pᵢ represents predicted mask values, gᵢ the ground truth, and ε a smoothing term.
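A minimal, framework-agnostic NumPy sketch of this soft Dice loss (in actual training it would be expressed in the autodiff framework used, so gradients flow through the predictions):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1.0):
    """Differentiable Dice loss: 1 - (2*sum(p*g)+eps)/(sum(p)+sum(g)+eps).

    pred   : predicted mask probabilities in [0, 1]
    target : binary ground-truth mask
    eps    : smoothing term keeping the ratio defined for empty masks
    """
    pred, target = pred.ravel(), target.ravel()
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# A perfect prediction drives the loss to 0; a disjoint one toward 1.
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0
print(soft_dice_loss(mask, mask))        # 0.0
print(soft_dice_loss(1.0 - mask, mask))  # close to 1.0
```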
4. Model Integration and Evaluation Strategy
The FireNet Dataset is pivotal to supervised training and robust evaluation of deep learning architectures:
- Classification: The shallow FireNet architecture (14 layers, convolutional and fully connected, softmax output) is trained with a 70/30 train/validation split using resized (64×64×3) images.
- Segmentation: A U-Net architecture with ResNet-like blocks and skip connections operates on IR frames; innovations include the use of previous frame predictions (“PrevPred”) to improve temporal coherence. A 3D ConvNet yielding higher latency was discarded in favor of pruned U-Net variants.
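The "PrevPred" idea amounts to feeding the previous frame's predicted mask back in as an extra input channel. The function name and shapes below are illustrative assumptions, not the paper's exact interface:

```python
import numpy as np

def make_prevpred_input(ir_frame, prev_mask=None):
    """Stack the previous prediction as an extra channel ('PrevPred' sketch).

    ir_frame  : (H, W) infrared frame
    prev_mask : (H, W) previous predicted mask, or None for the first frame
    Returns an (H, W, 2) array a segmentation network would consume.
    """
    if prev_mask is None:                 # first frame: no history yet
        prev_mask = np.zeros_like(ir_frame)
    return np.stack([ir_frame, prev_mask], axis=-1)

frame = np.random.default_rng(1).random((128, 128))
x0 = make_prevpred_input(frame)                                # first frame
x1 = make_prevpred_input(frame, prev_mask=np.ones_like(frame)) # later frame
print(x0.shape)  # (128, 128, 2)
```

Feeding the prior mask as input lets a per-frame 2D U-Net exploit temporal coherence without the latency cost of the discarded 3D ConvNet.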
Performance metrics reported include accuracy, precision, recall, and F-measure for detection, and F1 score plus inference speed (fps) for segmentation. For example, the production segmentation model delivers 20 fps at F1 = 92 on an NVIDIA K80 GPU.
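These detection metrics follow their standard confusion-matrix definitions, sketched below. The counts used are hypothetical, for illustration only, not FireNet's reported numbers:

```python
def detection_metrics(tp, fp, fn, tn):
    """Standard detection metrics from confusion-matrix counts."""
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1        = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts on a fire/no-fire test set.
acc, p, r, f1 = detection_metrics(tp=90, fp=10, fn=5, tn=95)
print(round(f1, 3))  # 0.923
```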
5. Benchmarking and Comparative Analysis
Comparisons were drawn with Foggia’s dataset, which is less diverse due to high inter-frame similarity. The FireNet dataset’s amalgamation of challenging examples ensures superior generalization in real-world conditions. Classification models show slightly higher accuracy on Foggia’s dataset, attributed to its lower inherent scenario diversity. For segmentation, the large annotated IR dataset enables robust, real-time perimeter estimation, markedly reducing manual annotation labor for intelligence analysts in disaster contexts.
| Dataset | Diversity | Detection Accuracy | Segmentation F1 | Inference Speed |
|---|---|---|---|---|
| FireNet | High | High | 92-95 | Up to 20 fps |
| Foggia | Low | Slightly higher | — | — |
6. Availability and Open Science Implications
All elements of the FireNet Dataset—original media, extracted frames, labels, and trained models—are available in a structured format via the referenced GitHub repository (“github_ref”). Images are in common formats (JPEG, PNG), and video frames are organized to facilitate independent use or replication by the research community. This open access supports validation studies and encourages innovation in fire detection and situational awareness tools.
7. Impact and Research Extensions
The FireNet Dataset underpins the development and evaluation of portable fire detection (FireNet) and perimeter segmentation (FireNet U-Net) models, with applications extending from embedded IoT devices (e.g., Raspberry Pi) to disaster response operations employing real-time mapping. Its integration into KutralNet (Ayala et al., 2020) further validates the FireNet dataset as a benchmark for low-FLOP, high-accuracy models, demonstrating up to 71% parameter reduction without substantial accuracy loss. The diversity and realism of its scenarios make the dataset a critical resource for advancing fire recognition methodologies suitable for deployment in constrained and time-critical environments.
A plausible implication is that the continued use and extension of the FireNet Dataset will foster further improvements in real-time, automated fire detection and segmentation systems, with tangible operational benefits in safety, resource allocation, and emergency management.