GOES LST Fields: Reconstruction & Deep Fusion
- GOES LST fields are spatiotemporal temperature maps derived from GOES-R ABI that support environmental process modeling and hazard monitoring.
- Gap filling relies on spatial interpolation, temporal modeling, and spatiotemporal fusion to mitigate missing data and spatial/temporal resolution trade-offs.
- Deep learning and multi-sensor data fusion sharpen spatial detail and improve validation scores such as RMSE and SSIM, supporting operational use.
The GOES (Geostationary Operational Environmental Satellite) Land Surface Temperature (LST) fields are spatiotemporally resolved products derived primarily from the Advanced Baseline Imager (ABI) instruments on the GOES-R series. These fields constitute a foundational dataset for environmental process modeling, hazard monitoring, and climate diagnostics, but present technical challenges due to inherent trade-offs between spatial and temporal resolution, missing data, and atmospheric obscuration. Modern research leverages statistical reconstruction, spatiotemporal fusion, and deep learning–based superresolution to generate spatially continuous, high-resolution, high-frequency LST fields suitable for both scientific and operational use (Bouaziz et al., 21 Dec 2024, Dai et al., 30 Sep 2024, Wu et al., 2019).
1. Physical and Observational Basis
GOES ABI retrieves surface-emitted thermal radiance at multiple infrared wavelengths, providing baseline LST fields at 2–4 km spatial resolution and ~15 min temporal resolution for a fixed hemispheric domain. The operational LST product is derived using physical radiative transfer models (split-window approaches), with explicit corrections for surface emissivity and atmospheric transmission. However, persistent challenges include:
- Data loss from cloud cover, sun glint, or instrument gaps
- Systematic bias as a function of view zenith angle and atmospheric conditions
- Coarse spatial scale relative to key surface heterogeneity (urban/rural, riparian zones)
Complementary sensors (MODIS, VIIRS, Landsat, Himawari) provide higher spatial or spectral fidelity but at reduced temporal frequency, motivating data fusion.
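For concreteness, one widely used generalized split-window form (shown here as a generic illustration; the operational ABI algorithm uses its own regression coefficients) expresses LST in terms of the two TIR brightness temperatures $T_{11}$ and $T_{12}$, mean surface emissivity $\varepsilon$, and band emissivity difference $\Delta\varepsilon$:

$$
\mathrm{LST} = b_0 + \left(b_1 + b_2\,\frac{1-\varepsilon}{\varepsilon} + b_3\,\frac{\Delta\varepsilon}{\varepsilon^2}\right)\frac{T_{11}+T_{12}}{2} + \left(b_4 + b_5\,\frac{1-\varepsilon}{\varepsilon} + b_6\,\frac{\Delta\varepsilon}{\varepsilon^2}\right)\frac{T_{11}-T_{12}}{2},
$$

where the coefficients $b_0,\dots,b_6$ are fit against radiative-transfer simulations, typically stratified by view zenith angle and atmospheric water vapor.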
2. Methods for Missing Data Reconstruction
Spatially continuous LST reconstruction within GOES fields is essential to mitigate the effects of cloudy pixels, instrument noise, or scanning stripes. Methodologies are classified as spatial, temporal, and spatiotemporal:
- Spatial Interpolation:
- Inverse Distance Weighting (IDW): Simple and effective in homogeneous regions; can oversmooth features and is sensitive to the choice of power parameter $p$ and search radius $r$ (see the gap-filling sketch after this list).
- Spline Interpolation: Thin-plate splines enforce smooth (twice-differentiable) surfaces, solving a linear system for the interpolation coefficients.
- Ordinary Kriging: Employs an empirical variogram for spatial autocorrelation and produces minimum-variance linear unbiased estimates; model fitting and large-scale computation are limiting factors near GOES disk edges due to non-stationary viewing geometry.
- Temporal Reconstruction:
- Auto-regressive Models: Fill missing times from past values, fitting $T(t) = \sum_{i=1}^{p} \phi_i\, T(t-i) + \varepsilon(t)$ to each pixel's clear-sky record.
- Harmonic or Fourier Analysis: Captures regular diurnal/seasonal cycles in clear-sky LST.
- Physically Inspired Cycle Models: Fit Gaussian or logistic curves to daily LST cycles.
- Spatiotemporal Methods:
- Sequential (2-step): Apply temporal filling, then spatial; effective but not fully optimal.
- Spatio-Temporal Kriging: Models LST as a separable space-time Gaussian process, estimating covariance from empirical variograms in both space and time.
- Cloudy Pixel Imputation:
- PMW-based: Incorporate passive-microwave (e.g., AMSR2) skin temperatures via radiative-transfer or regression bridging, accounting for coarse spatial support and emissivity.
- SEB-based: Solve surface energy balance equations using clear-sky neighbors, requiring auxiliary meteorological fields for net radiation and fluxes.
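To make the spatial-interpolation step concrete, here is a minimal IDW gap-filling sketch in Python (NumPy only). The power parameter `p`, search radius `r`, and the toy field are illustrative choices, not values from the cited studies.

```python
import numpy as np

def idw_fill(field, p=2.0, r=5):
    """Fill NaN gaps in a 2-D LST field by inverse-distance weighting.

    field : 2-D array with np.nan at missing (e.g., cloudy) pixels
    p     : IDW power parameter (controls distance decay); illustrative default
    r     : search radius in pixels; illustrative default
    """
    filled = field.copy()
    ys, xs = np.where(np.isnan(field))          # missing pixel coordinates
    vy, vx = np.where(~np.isnan(field))         # valid pixel coordinates
    vals = field[vy, vx]
    for y, x in zip(ys, xs):
        d = np.hypot(vy - y, vx - x)            # distances to valid pixels
        near = (d > 0) & (d <= r)
        if not near.any():
            continue                            # no neighbors within radius
        w = 1.0 / d[near] ** p                  # inverse-distance weights
        filled[y, x] = np.sum(w * vals[near]) / np.sum(w)
    return filled

# Toy example: a smooth field with a simulated cloud gap
lst = 290.0 + np.add.outer(np.linspace(0, 5, 50), np.linspace(0, 5, 50))
lst[20:26, 20:26] = np.nan
print(np.isnan(idw_fill(lst)).sum())            # 0 if all gaps fall within r
```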
3. Spatiotemporal Fusion and Superresolution Techniques
The spatiotemporal fusion problem formally seeks to estimate high-spatial, high-temporal LST fields using multi-sensor data:
- $C(x, t)$: low-spatial, high-temporal series (e.g., GOES, MODIS)
- $F(x, t)$: high-spatial, low-temporal series (e.g., Landsat, VIIRS)

Let $t_k$ denote the reference times with coincident $(C, F)$ observations and $t_p$ the prediction time. The prediction is

$$\hat{F}(x, t_p) = f\big(C(x, t_p),\ \{C(x, t_k),\, F(x, t_k)\}_k\big),$$

with objectives for spatial fidelity, temporal consistency, and gap filling.
Key classes of STF techniques:
- Weighted Function (STARFM/ESTARFM):
Weighted local mixing leveraging spatial kernels and spectral similarity. For each fine-scale location $x$, the predicted value at $t_p$ is a locally weighted sum of coarse/fine differences at neighboring coarse centroids, with weights parameterized by spatial and spectral distance. Tuning the kernel and similarity parameters is critical for performance (a simplified sketch follows this list).
- Unmixing-Based Approaches:
Model each coarse pixel as a mixture of temporally-evolving endmembers (e.g., land cover types) with fractional abundances determined at the fine spatial scale.
- Hybrid Strategies:
Combine statistical and physically-based approaches, such as BLEST, which blends STARFM outputs with unmixing predictions via optimized weighting.
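As a hedged illustration of the weighted-function idea, the sketch below predicts the fine field at $t_p$ from a single reference pair by adding a locally weighted coarse temporal difference; the window size and the purely spectral weighting are simplifications of full STARFM, not the published algorithm.

```python
import numpy as np

def starfm_like(fine_tk, coarse_tk, coarse_tp, win=5):
    """Predict the fine-scale field at t_p from one reference pair.

    fine_tk   : fine-resolution image at reference time t_k
    coarse_tk : coarse image at t_k, resampled to the fine grid
    coarse_tp : coarse image at prediction time t_p, resampled to the fine grid
    win       : neighborhood size; illustrative simplification
    """
    diff = coarse_tp - coarse_tk                 # coarse temporal change
    h = win // 2
    pad = np.pad(diff, h, mode="reflect")
    pred = np.empty(fine_tk.shape, dtype=float)
    for i in range(fine_tk.shape[0]):
        for j in range(fine_tk.shape[1]):
            window = pad[i:i + win, j:j + win]
            # Spectral-similarity weights: favor neighbors whose coarse change
            # resembles the central pixel's change (spatial kernel omitted here).
            w = 1.0 / (1.0 + np.abs(window - diff[i, j]))
            pred[i, j] = fine_tk[i, j] + np.sum(w * window) / np.sum(w)
    return pred
```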
4. Deep Learning Models for GOES LST Enhancement
Recent advances leverage deep learning for both fusion and downscaling, capturing complex nonlinear relationships among multisensor fields, spatial context, atmospheric state, and auxiliary predictors.
- Architectural Taxonomy (Bouaziz et al., 21 Dec 2024):
- Encoder–Decoder (CNN/AEs): Separate spatial and temporal encoders fused in latent space.
- GAN-Based: Employ adversarial training; generators upsample and hallucinate spatial structure.
- Attention-Driven (Transformers): Model sequence dependencies using spatial and temporal tokens; cross-attention fuses information for fine-scale LST prediction (a minimal sketch follows the benchmark table below).
- Hybrid RNNs (Conv-LSTM): Capture evolution across time with convolutional recurrence.
- Example Models:
- EDCSTFN (AE-based): Two parallel encoders for spatial and temporal streams, with feature, content, and perceptual losses; preserves urban/river structure when fusing GOES with high-res reference.
- MLFF-GAN (GAN-based): Multi-level, U-Net-like convolution, adaptive normalization, and attention; robust in generalization, but can introduce minor edge artifacts.
- MoCoLSK (Dai et al., 30 Sep 2024): Adds modality-conditioned selective-kernel blocks and dynamic convolutional weighting; robust to spatial non-stationarity and complex land cover.
- Training and Evaluation:
Composite loss functions include MSE (content), SSIM (perceptual), feature, and adversarial losses. State-of-the-art architectures report RMSE of roughly $0.5$–$0.7$ K and SSIM above $0.95$, with correspondingly high PSNR, for the benchmarked superresolution factors against held-out test tiles with auxiliary guidance.
| Method | RMSE (K) | SSIM |
|---|---|---|
| Bicubic | 1.03 | 0.900 |
| SUFT | 0.72 | 0.960 |
| MoCoLSK | 0.56 | 0.972 |
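To illustrate the attention-driven class, here is a minimal cross-attention fusion step in PyTorch; the token counts, embedding width, and interpretation of the two streams are hypothetical placeholders rather than any published architecture.

```python
import torch
import torch.nn as nn

d = 64  # placeholder embedding width
attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)

spatial_tokens = torch.randn(8, 256, d)   # e.g., 16x16 patch embeddings of fine-scale guidance
temporal_tokens = torch.randn(8, 96, d)   # e.g., one day of 15 min GOES steps for the same tile

# Spatial tokens query the temporal stream; the output carries temporally
# informed features per spatial token for fine-scale LST prediction.
fused, _ = attn(query=spatial_tokens, key=temporal_tokens, value=temporal_tokens)
print(fused.shape)  # torch.Size([8, 256, 64])
```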
- Adapting to GOES:
- Inputs: 2 km GOES LST (TIR), upsampled; multispectral guidance from VIS/NIR/SWIR ABI bands and static variables (DEM, NDVI).
- Procedure: Prepare multi-channel input tensors, tile and normalize, and train with L1 (pixel), multi-scale, and perceptual losses using AdamW (a minimal training-step sketch follows this list).
- Scripts and pre-trained networks are available for model definition, training, validation, and inference.
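A minimal training-step sketch under these assumptions (PyTorch): the tiny CNN, channel counts, tile size, and loss weights are placeholders, and the perceptual term, which requires a pretrained feature extractor, is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder network: any superresolution backbone could be substituted.
net = nn.Sequential(
    nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.AdamW(net.parameters(), lr=1e-4, weight_decay=1e-2)

# Hypothetical batch: upsampled 2 km GOES TIR + 5 guidance channels
# (e.g., VIS/NIR/SWIR bands, DEM, NDVI), with a fine-scale LST target.
x = torch.randn(8, 6, 128, 128)    # normalized input tiles
y = torch.randn(8, 1, 128, 128)    # normalized target LST tiles

pred = net(x)
loss = F.l1_loss(pred, y)          # pixel-level L1 term
for s in (2, 4):                   # simple multi-scale L1 term
    loss = loss + F.l1_loss(F.avg_pool2d(pred, s), F.avg_pool2d(y, s))

opt.zero_grad()
loss.backward()
opt.step()
```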
5. Practical Data Workflows and Open Infrastructure
Practical GOES LST field generation comprises:
- Data Acquisition and Preprocessing:
- Download GOES ABI radiances, apply split-window TIR retrievals to derive LST at 2 km/15 min.
- Mask clouds/reflections, angle-correct for view-dependent bias.
- Reproject and collocate with MODIS/Landsat/VIIRS for fusion, or stack guidance layers for superresolution.
- Reconstruction:
- Apply spatial or spatiotemporal interpolation to remove instrument and geolocation gaps.
- Fill cloudy pixels using PMW- or SEB-based approaches, where auxiliary microwave or reanalysis inputs are available.
- Fusion and Superresolution:
- Align high-resolution references temporally and input them to the selected STF model (weighted function, hybrid, or deep network).
- For deep learning, tile the fields, normalize, and train with state-of-the-art architectures.
- Validation:
- Employ cross-validation (simulated gaps, leave-one-out) and compare against ground networks (SURFRAD, FLUXNET).
- Metrics: RMSE, MAE, SSIM, PSNR, correlation coefficient.
- Open-Source Ecosystem:
- GrokLST (PyTorch) for LST downscaling networks and pipelines (Dai et al., 30 Sep 2024).
- Benchmark datasets of MODIS–Landsat pairs for training deep STF models (Bouaziz et al., 21 Dec 2024).
6. Evaluation Metrics and Benchmarking
Evaluation is standardized with clear, quantitative metrics (a minimal computation sketch follows this list):
- RMSE: Root mean square error.
- MAE: Mean absolute error.
- SSIM: Structural similarity index.
- PSNR: Peak signal-to-noise ratio.
- SAM: Spectral angle mapper.
- CC: Correlation coefficient.
- ERGAS: Relative global error in synthesis.
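As a rough illustration (NumPy only), the sketch below computes RMSE, MAE, PSNR, CC, and a simplified global-statistics SSIM; production evaluation would use windowed SSIM and library implementations, and the `data_range` default is an assumption for inputs normalized to [0, 1].

```python
import numpy as np

def metrics(pred, ref, data_range=1.0):
    """RMSE, MAE, PSNR, CC, and a simplified (global) SSIM."""
    err = pred - ref
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    psnr = float(20 * np.log10(data_range / rmse))
    cc = float(np.corrcoef(pred.ravel(), ref.ravel())[0, 1])
    # Global-statistics SSIM (windowed SSIM is standard in practice)
    mu_p, mu_r = pred.mean(), ref.mean()
    var_p, var_r = pred.var(), ref.var()
    cov = ((pred - mu_p) * (ref - mu_r)).mean()
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    ssim = float((2 * mu_p * mu_r + c1) * (2 * cov + c2)
                 / ((mu_p**2 + mu_r**2 + c1) * (var_p + var_r + c2)))
    return {"RMSE": rmse, "MAE": mae, "PSNR": psnr, "CC": cc, "SSIM": ssim}
```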
Test cases indicate that deep-learning models (notably MoCoLSK and EDCSTFN) consistently outperform classical fusion and interpolation, with improved preservation of spatial structure (particularly urban/river boundaries). RMSE for leading methods is on the order of 0.5–0.7 K; bias and variance are lower across seasons and land covers than for weighted-function and unmixing schemes.
7. Ongoing Challenges and Research Directions
Several unsolved technical challenges persist:
- Cloud/Gap Robustness: Multi-objective training on masked input to improve inpainting/generalization.
- Bias Correction: Explicit correction for sensor, atmospheric, and emissivity differences, e.g., sensor-bias CNNs.
- Domain Adaptation: Transfer learning and few-shot adaptation for regional, seasonal, or sensor-specific generalizability.
- Enhancing Resolution: Inclusion of Sentinel-2 or PlanetScope data to exceed Landsat’s 30 m spatial limit.
- Hybrid Model Exploration: Potential for graph convolutional networks for irregular spatial sampling; joint vision–text fusion using LLMs for semantic regularization.
- Validation: Need for denser in situ LST networks and improved comparison protocols for cross-sensor harmonization.
A plausible implication is that the convergence of multi-modal guidance, dynamic receptive field selection, and physical constraint–aware fusion will underpin future GOES LST field generation frameworks.
In summary, GOES LST fields are increasing in spatial and temporal fidelity through integration of statistical reconstruction and advanced deep learning methodologies. Modern practice leverages multi-source, multi-modal fusion, physically aware loss, and open-source toolkits to deliver products suitable for both operational monitoring and fine-scale environmental research (Bouaziz et al., 21 Dec 2024, Dai et al., 30 Sep 2024, Wu et al., 2019).