
FireScope-Bench: Wildfire Risk Benchmark

Updated 28 November 2025
  • FireScope-Bench is a large-scale, multimodal dataset and benchmark that integrates high-resolution Sentinel-2 imagery, climate normals, and expert risk rasters for wildfire prediction.
  • It supports rigorous model evaluation using metrics like MSE, MAE, SSIM, ROC AUC, and IoU in both in-distribution and out-of-distribution settings.
  • The framework’s chain-of-thought vision-language model enables interpretable spatial predictions, enhancing generalization and causal reasoning in risk assessment.

FireScope-Bench is a large-scale, multimodal dataset and benchmark developed for high-resolution, reasoning-intensive wildfire risk prediction. It combines Sentinel-2 satellite imagery, coarse-resolution climate normals, and expert-defined continuous wildfire risk rasters across the continental United States, supplemented by actual wildfire events and matched control tiles from Europe to enable systematic evaluation of model generalization across geographic regions. FireScope-Bench supports the FireScope framework, which incorporates a chain-of-thought (CoT) vision-language model (VLM) Oracle and a lightweight vision encoder–decoder, facilitating interpretable, causally grounded spatial prediction at the raster level (Markov et al., 21 Nov 2025).

1. Dataset Composition and Preprocessing

FireScope-Bench is constructed to maximize spatial, temporal, and multimodal diversity:

  • Spatial Coverage and Tiling:
    • United States: The benchmark covers 5.7 million km², divided into 50,000 geographically stratified, non-overlapping tiles (341×341 pixels, ≈100 km² each, 30 m/pixel). The splits are 40,000 training, 4,000 validation, and 4,000 test tiles.
    • Europe: 3,000 tiles recording actual wildfire events (2018–2025) and 2,000 control tiles sampled per country, used exclusively for out-of-distribution testing.
  • Sentinel-2 Imagery:
    • Imagery is provided at 10 m/pixel, Level-2A (bottom-of-atmosphere reflectance).
    • US risk rasters and European controls reflect the summer (June 22–September 22) of 2021, while European fire events use imagery from the summer preceding each event.
    • Preprocessing includes cloud masking and construction of a pixel-wise median mosaic, followed by band-wise z-score normalization:

    $$\tilde{M}_{i,j}^{b} = \frac{M_{i,j}^{b} - \mu_{b}}{\sigma_{b}}$$

    where $M_{i,j}^{b}$ is the median seasonal reflectance for pixel $(i, j)$ and band $b$, and $\mu_b$, $\sigma_b$ are the band-wise mean and standard deviation.
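The mosaic-and-normalize step can be sketched as follows (a minimal numpy sketch; the array shapes and the NaN-based cloud-masking convention are assumptions, not the benchmark's own code):

```python
import numpy as np

def median_mosaic_zscore(stack, cloud_mask):
    """Build a per-pixel median mosaic from a seasonal stack and z-score each band.

    stack:      (T, B, H, W) seasonal reflectance observations
    cloud_mask: (T, H, W) boolean, True where a pixel is cloudy
    Returns:    (B, H, W) band-wise normalized mosaic.
    """
    # Mask cloudy observations so the median ignores them.
    masked = np.where(cloud_mask[:, None, :, :], np.nan, stack)
    mosaic = np.nanmedian(masked, axis=0)  # (B, H, W)

    # Band-wise z-score: (M - mu_b) / sigma_b over all pixels of band b.
    mu = np.nanmean(mosaic, axis=(1, 2), keepdims=True)
    sigma = np.nanstd(mosaic, axis=(1, 2), keepdims=True)
    return (mosaic - mu) / (sigma + 1e-8)
```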

  • Climate Normals (NASA POWER):

    • Variables: near-surface temperature, precipitation, relative humidity, wind speed, and wind direction, aggregated monthly (12 months), yielding a vector $C \in \mathbb{R}^{60}$ per tile.
    • Spatial resolution is 50 km; each climate vector is interpolated to the tile centroid.
    • Climate features are independently standardized:

    $$\tilde{C}_k = \frac{C_k - \mu_k}{\sigma_k}$$

  • Expert-Defined Risk Raster:

    • Source: Wildfire Risk to Communities project, providing continuous values in $[0, 1]$ for expected consequence to built structures.
    • The raster has a native resolution of 30 m/pixel and is quintile-transformed for even distribution of risk values.
    • No smoothing is applied, preserving high-frequency spatial detail.
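One plausible reading of the quintile transformation above is a rank-based remapping that makes the risk histogram approximately uniform; the sketch below is a hypothetical numpy version, not the project's actual binning code:

```python
import numpy as np

def quintile_transform(risk):
    """Map a continuous risk raster to an evenly distributed [0, 1] scale.

    Each pixel is replaced by its empirical quantile rank, so the output
    histogram is roughly uniform; flooring (output / 0.2) would give five
    discrete quintile bins instead of a continuous value.
    """
    flat = risk.ravel()
    ranks = flat.argsort().argsort().astype(np.float64)  # 0 .. N-1
    return (ranks / (flat.size - 1)).reshape(risk.shape)
```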
  • Input Tensor Construction:

Sentinel imagery bands (normalized) and broadcast climate vectors are concatenated per pixel:

$$X_{i, j} = [\tilde{M}_{i, j}^{1}, \dots, \tilde{M}_{i, j}^{B},\; \tilde{C}_{1}, \dots, \tilde{C}_{60}]$$

yielding $X \in \mathbb{R}^{341 \times 341 \times (B + 60)}$.
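Per-tile input construction, concatenating the normalized bands with the broadcast climate vector, can be sketched as (a minimal numpy sketch; shapes follow the definitions above):

```python
import numpy as np

def build_input_tensor(bands, climate):
    """Concatenate normalized imagery bands with a broadcast climate vector.

    bands:   (H, W, B) band-wise z-scored mosaic
    climate: (60,) standardized climate-normal vector for the tile
    Returns: (H, W, B + 60) input tensor X.
    """
    H, W, _ = bands.shape
    # Broadcast the tile-level climate vector to every pixel.
    clim = np.broadcast_to(climate, (H, W, climate.size))
    return np.concatenate([bands, clim], axis=-1)
```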

2. Benchmark Protocol and Evaluation

Benchmarking in FireScope-Bench is structured to assess both in-distribution performance (USA) and cross-continental generalization (Europe):

  • Data Splits:
    • USA: 40,000 train, 4,000 val, 4,000 test.
    • Quick-experiment subset: 1,000 train, 100 val, 100 test tiles.
    • Europe: 3,000 wildfire event tiles, 2,000 control tiles.
  • Evaluation Metrics:
    • In-Distribution (Continuous Raster):
    • Mean Squared Error (MSE): $\mathrm{MSE} = \frac{1}{HW} \sum_{i,j} (\hat{r}_{i,j} - r_{i,j})^2$
    • Mean Absolute Error (MAE): $\mathrm{MAE} = \frac{1}{HW} \sum_{i,j} |\hat{r}_{i,j} - r_{i,j}|$
    • Structural Similarity Index (SSIM) [Wang et al. 2004]: computed with an $11 \times 11$ Gaussian window.
    • Out-of-Distribution (Wildfire Events):
    • Brier Score: $\mathrm{BS} = \frac{1}{N}\sum_{i}(p_i - y_i)^2$
    • ROC AUC: $\mathrm{AUC} = \int_0^1 \mathrm{TPR}(\mathrm{FPR}^{-1}(t))\,dt$
    • Expected Calibration Error (ECE), using 15 bins.
    • Intersection over Union (IoU) for burned-area masks.
    • Ordinal Area-level Risk: Quadratic Weighted Kappa (QWK) over 10 bins.
  • Baselines:

Each vision backbone (U-Net, SegFormer MiT-B5, AlphaEarth embedding) is tested under four conditioning regimes: (1) image only, (2) image + raw climate vector, (3) image + Oracle scalar (VLM pre-CoT), (4) image + CoT Oracle.
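The out-of-distribution probabilistic metrics can be sketched as follows (a minimal numpy implementation of the Brier score and a 15-bin ECE; this is an illustrative sketch, not the benchmark's own evaluation code):

```python
import numpy as np

def brier_score(p, y):
    """Mean squared error between predicted probabilities p and binary labels y."""
    return float(np.mean((p - y) ** 2))

def expected_calibration_error(p, y, n_bins=15):
    """ECE: bin-weighted mean gap between mean confidence and accuracy.

    Uses equal-width probability bins, as in the 15-bin setting above.
    """
    # Assign each prediction to one of n_bins equal-width bins on [0, 1].
    idx = np.minimum((p * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        in_bin = idx == b
        if in_bin.any():
            ece += in_bin.mean() * abs(p[in_bin].mean() - y[in_bin].mean())
    return float(ece)
```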

3. FireScope Model Framework and Chain-of-Thought Oracle

Built on FireScope-Bench, the FireScope model pioneers explicit reasoning in spatial prediction:

  • Model Architecture:

    • Stage 1: A CoT VLM Oracle produces a reasoning trace $z$ and a scalar risk score $s$:

    $$z = \mathrm{CoT\text{-}Oracle}(X), \qquad s = \mathrm{score}(z)$$

    • Stage 2: A vision encoder–decoder $G$, conditioned on $s$ via FiLM blocks, generates the risk raster $\hat{R}$:

    $$\hat{R} = G(X, s)$$

  • Oracle Fine-Tuning (GRPO):

    • Reward combines classification accuracy ($R_{\mathrm{acc}}$) and output formatting ($R_{\mathrm{fmt}}$):

    $$R = 0.9\,R_{\mathrm{acc}} + 0.1\,R_{\mathrm{fmt}}$$

    • The GRPO objective uses PPO-style clipping and a KL regularizer:

    $$\mathcal{J}_{\mathrm{GRPO}}(\theta) = \mathbb{E}\left[\frac{1}{n} \sum_{i} \min\!\left(d_i\hat{A}_i,\ \mathrm{clip}(d_i, 1-\epsilon, 1+\epsilon)\,\hat{A}_i\right) - \beta\,D_{\mathrm{KL}}(\pi_\theta \,\|\, \pi_{\mathrm{ref}})\right]$$

    where $d_i = \frac{\pi_\theta(o_i \mid X)}{\pi_{\mathrm{ref}}(o_i \mid X)}$ is the importance ratio of completion $o_i$.
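The clipped group objective can be sketched numerically (a toy numpy sketch of the GRPO surrogate for one sampled group; the group-normalized advantages and the simple log-ratio KL estimate are simplifying assumptions, not the paper's training code):

```python
import numpy as np

def grpo_objective(logp_new, logp_ref, rewards, eps=0.2, beta=0.01):
    """Toy GRPO surrogate for one group of n sampled completions.

    logp_new / logp_ref: (n,) log-probs of each completion under the
    current and reference policies; rewards: (n,) scalar rewards.
    """
    # Group-normalized advantages: GRPO replaces a learned critic with
    # normalization of rewards over the sampled group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    d = np.exp(logp_new - logp_ref)                # importance ratios d_i
    clipped = np.clip(d, 1 - eps, 1 + eps)
    surrogate = np.minimum(d * adv, clipped * adv).mean()
    # Crude per-sample KL penalty via the mean log-ratio.
    kl = (logp_new - logp_ref).mean()
    return surrogate - beta * kl
```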

  • Encoder–Decoder Training:

Loss function combines Smooth-L1, SSIM, and gradient matching:

$$\mathcal{L}(y, \hat{y}) = \mathrm{SmoothL}_1(y, \hat{y}) + 0.5\,\left(1 - \mathrm{SSIM}(\tilde{y}, \tilde{\hat{y}})\right) + 0.2\,\|\nabla y - \nabla \hat{y}\|_1$$

FiLM conditioning applies per-block, using $s$ to generate the affine transformation parameters.
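FiLM conditioning on the Oracle score can be sketched as below (a minimal sketch assuming a tiny learned mapping from $s$ to per-channel scale and shift; the parameterization around identity is an illustrative choice, not the paper's architecture):

```python
import numpy as np

class FiLMBlock:
    """Feature-wise linear modulation: scale and shift feature maps by
    (gamma, beta) predicted from the conditioning scalar s."""

    def __init__(self, channels, rng):
        # Linear map s -> (gamma, beta), one pair per channel.
        self.W = rng.standard_normal((2 * channels, 1)) * 0.1
        self.b = np.zeros(2 * channels)
        self.channels = channels

    def __call__(self, feats, s):
        # feats: (C, H, W); s: scalar Oracle score.
        gb = (self.W @ np.array([s])).ravel() + self.b  # (2C,)
        gamma, beta = gb[: self.channels], gb[self.channels :]
        # Parameterize gamma around identity so s = 0 is a no-op.
        return (1.0 + gamma)[:, None, None] * feats + beta[:, None, None]
```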

  • Training Schedule:
    • SegFormer: learning rate $10^{-5}$, 500 epochs.
    • U-Net/AlphaEarth: learning rate $10^{-3}$, 1000 epochs.

4. Generalization and Model Interpretability

FireScope-Bench facilitates robust assessment of spatial generalization and interpretability:

  • Cross-Continental Generalization:
    • SegFormer conditioned on the CoT Oracle, trained in the USA and tested in Europe, achieves a Brier score of ≈ 0.205 (vs. 0.222 image-only), ROC AUC ≈ 0.727 (vs. 0.705), and IoU ≈ 0.184 (vs. 0.179). Typical pixel-level ROC AUC gains are ≈ 0.04 and IoU gains ≈ 0.01, while in-distribution accuracy (MSE/MAE/SSIM) remains within ±5% of the best baseline values.
  • Reasoning Trace Examples:
    • Oracle traces follow causal logic, identifying vegetation density, dryness, humidity, wind, and topography and delivering a stepwise conclusion (e.g., “FINAL ANSWER: 7”).
    • Conditioning vision models on the CoT Oracle results in pixel-level risk predictions that align with expert expectations, notably on slopes and ridges.
  • Interpretability Metrics:
    • Fidelity: Artificial CoT perturbations shift raster predictions ≈ 33% toward the manipulated extreme.
    • Consistency: Paraphrased CoTs yield highly similar outputs (consistency ≈ 0.91).
    • Expert Study: Domain experts using CoT summaries reach QWK of 0.33 and 0.11; GPT-5 synthesized CoTs reach QWK up to 0.59, indicating meaningful, improvable causal conveyance.

5. Impact and Applications

FireScope-Bench enables systematic investigation of reasoning-driven spatial modeling for wildfire risk, supporting cross-continental generalization studies:

  • Comparative Evaluation:

Provides rigorous baselines and metrics for multimodal raster prediction, benchmarking both standard and reasoning-enhanced approaches in variable conditions.

  • Generalization:

Demonstrates that explicit language-based causal reasoning in VLMs provides a powerful prior, improving out-of-distribution performance without sacrificing spatial fidelity.

  • Interpretability:

Facilitates quantification and analysis of reasoning traces, supporting evaluation of model fidelity and consistency with expert domain knowledge.

  • Research Utility:

FireScope-Bench and the FireScope framework present the first empirical evidence that language-based reasoning enhances generalization for visual generation tasks in wildfire risk modeling and establish a foundation for developing interpretable, robust spatial models that integrate multimodal evidence (Markov et al., 21 Nov 2025).

6. Future Directions and Research Significance

The release of FireScope-Bench and the FireScope framework marks a foundational step for reasoning-driven spatial modeling:

  • Extensibility:

The dataset and benchmark permit expansion to further climatic regions and hazard domains, supporting generalized research in causal, multimodal risk evaluation.

  • Methodological Advancement:

The paradigm of chain-of-thought VLM guidance for visual generation is applicable in broader geospatial contexts, including climate resilience, disaster prediction, and infrastructure planning.

  • Interpretability Studies:

Results motivate future investigation of CoT design, expert feedback loops, and causal trace optimization to further close the gap between human-expert and model reasoning.

This suggests that FireScope-Bench will serve as a robust testbed and foundation for developing, rigorously evaluating, and interpreting generalizable, multimodal approaches to wildfire and other geospatial risk prediction.
