Super-resolution of Sentinel-2 images: Learning a globally applicable deep neural network (1803.04271v2)

Published 12 Mar 2018 in cs.CV and cs.LG

Abstract: The Sentinel-2 satellite mission delivers multi-spectral imagery with 13 spectral bands, acquired at three different spatial resolutions. The aim of this research is to super-resolve the lower-resolution (20 m and 60 m Ground Sampling Distance - GSD) bands to 10 m GSD, so as to obtain a complete data cube at the maximal sensor resolution. We employ a state-of-the-art convolutional neural network (CNN) to perform end-to-end upsampling, which is trained with data at lower resolution, i.e., from 40->20 m, respectively 360->60 m GSD. In this way, one has access to a virtually infinite amount of training data, by downsampling real Sentinel-2 images. We use data sampled globally over a wide range of geographical locations, to obtain a network that generalises across different climate zones and land-cover types, and can super-resolve arbitrary Sentinel-2 images without the need of retraining. In quantitative evaluations (at lower scale, where ground truth is available), our network, which we call DSen2, outperforms the best competing approach by almost 50% in RMSE, while better preserving the spectral characteristics. It also delivers visually convincing results at the full 10 m GSD. The code is available at https://github.com/lanha/DSen2

Citations (259)

Summary

  • The paper’s primary contribution is DSen2, a deep learning method that super-resolves Sentinel-2’s 20 m and 60 m bands to 10 m and reduces RMSE by nearly 50% relative to the best competing approach.
  • It employs a state-of-the-art CNN for end-to-end upsampling and effective fusion of the multispectral bands.
  • Training on globally sampled, downsampled Sentinel-2 data yields robust performance across climate zones and land-cover types, with no per-scene retraining.

Super-Resolution of Sentinel-2 Images Through Deep Learning

This paper presents a method for super-resolving satellite imagery from the Sentinel-2 mission using convolutional neural networks (CNNs), with the objective of enhancing the 20 m and 60 m Ground Sampling Distance (GSD) bands to a uniform 10 m GSD. The approach, termed DSen2, leverages deep learning to achieve substantial accuracy gains over existing methods while remaining computationally efficient.
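The released implementation is linked in the abstract; the snippet below is only a schematic sketch of the general fusion idea (bilinearly upsample the low-resolution bands, concatenate them with the 10 m bands, pass the stack through residual blocks, and add the predicted correction back to the upsampled input). The class names, layer counts, and channel widths here are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """Simple residual block: conv-ReLU-conv with an identity skip."""
    def __init__(self, channels=128):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))

class SRFusionNet(nn.Module):
    """Illustrative fusion network: the 10 m bands guide the upsampling of
    the lower-resolution bands (hypothetical sizes, not the DSen2 release)."""
    def __init__(self, n_hi=4, n_lo=6, channels=128, n_blocks=6):
        super().__init__()
        self.head = nn.Conv2d(n_hi + n_lo, channels, 3, padding=1)
        self.body = nn.Sequential(*[ResBlock(channels) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(channels, n_lo, 3, padding=1)

    def forward(self, hi, lo, scale=2):
        # Bilinearly upsample the low-resolution bands onto the 10 m grid.
        lo_up = F.interpolate(lo, scale_factor=scale, mode="bilinear",
                              align_corners=False)
        x = torch.cat([hi, lo_up], dim=1)
        # Predict a correction that is added to the upsampled input, so the
        # network only has to learn the missing high-frequency detail.
        return lo_up + self.tail(self.body(F.relu(self.head(x))))
```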

The network performs end-to-end upsampling and is trained on real Sentinel-2 images that are synthetically downsampled, which yields a virtually unlimited supply of training pairs: the data are degraded by one scale step (40 m -> 20 m and 360 m -> 60 m), so the original 20 m and 60 m bands serve as ground truth, and the learned mapping is then applied at the native scale to reach 10 m. The training data are sampled globally so that the model generalises across geographical settings, climate zones, and land-cover types, allowing it to super-resolve arbitrary Sentinel-2 scenes without retraining.
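A rough sketch of how such training pairs could be simulated is shown below. It uses a simple Gaussian blur followed by decimation as the degradation model; the paper matches the downsampling to the sensor characteristics, which this assumption does not reproduce, and the function names are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(band, factor=2, sigma=1.0):
    """Simulate a coarser acquisition: blur, then subsample by `factor`.
    A plain Gaussian stands in for the sensor response (assumption)."""
    blurred = gaussian_filter(band, sigma=sigma)
    return blurred[::factor, ::factor]

def make_training_pair(bands_10m, bands_20m, factor=2):
    """Shift everything down one scale step: the original 20 m bands become
    the targets, while the degraded 20 m -> 40 m bands (plus the degraded
    10 m -> 20 m bands as guidance) become the network input."""
    inputs_hi = np.stack([degrade(b, factor) for b in bands_10m])  # 20 m guidance
    inputs_lo = np.stack([degrade(b, factor) for b in bands_20m])  # 40 m inputs
    targets = np.stack(bands_20m)                                  # 20 m ground truth
    return inputs_hi, inputs_lo, targets
```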

Quantitative evaluation (at the lower scale, where ground truth is available) shows that DSen2 substantially outperforms the best competing approaches, reducing Root Mean Square Error (RMSE) by almost 50% while better preserving spectral characteristics. This demonstrates the value of CNN architectures for multispectral, multiresolution data fusion: the super-resolved bands remain coherent with each other and with the native high-resolution input bands. Qualitative assessments reinforce the quantitative findings, with visibly better sharpness and detail preservation at the target 10 m resolution.
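For reference, the headline RMSE metric reduces to a straightforward per-band computation over the held-out, downsampled test scenes. This is a generic sketch, not the paper's evaluation script, and the variable names are placeholders.

```python
import numpy as np

def rmse(prediction, target):
    """Root Mean Square Error between a super-resolved band and its
    ground-truth counterpart (both 2-D arrays of reflectance values)."""
    return float(np.sqrt(np.mean((prediction - target) ** 2)))

# Example: average RMSE over all super-resolved bands of one test scene,
# where `preds` and `truths` are lists of same-shaped 2-D arrays.
# mean_rmse = np.mean([rmse(p, t) for p, t in zip(preds, truths)])
```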

The results are significant for both practical applications and theoretical advancement. Practically, the ability to super-resolve images to a high level of detail without loss of spectral integrity has implications for land-cover mapping, environmental monitoring, and resource management, where precise spatial information is crucial. Theoretically, this development could inspire future work integrating deep learning techniques with remote sensing, dovetailing with the field's expansive image archives.

Looking to the future, the methodology could be adapted for newer sensors or further improved by integrating deeper or more complex neural networks, as computational resources allow. This work may also inspire similar methodologies in broader geospatial data processing and enhancement tasks where high-resolution data are limited or costly.

The paper provides a marked advancement in the potential for large-scale analysis of high-resolution satellite imagery, which is increasingly essential for monitoring and understanding dynamic Earth systems in near real-time. The open-source release of the software and pre-trained models is a significant contribution to the community, encouraging further development and application.

Overall, the paper effectively demonstrates the capability of deep learning models to perform super-resolution on multispectral satellite imagery and sets a benchmark for future research in this field. The robustness and generalizability of the proposed method make it a valuable tool for a wide range of remote sensing applications.
