
Deep Blind Super-Resolution for Satellite Video (2401.07139v1)

Published 13 Jan 2024 in cs.CV, cs.AI, and eess.IV

Abstract: Recent efforts have witnessed remarkable progress in Satellite Video Super-Resolution (SVSR). However, most SVSR methods assume the degradation is fixed and known, e.g., bicubic downsampling, which makes them vulnerable in real-world scenes with multiple and unknown degradations. To alleviate this issue, blind SR has become a research hotspot. Nevertheless, existing approaches focus mainly on blur-kernel estimation while losing sight of another aspect critical to VSR tasks: temporal compensation, especially compensating blurry and smooth pixels with vital sharpness from severely degraded satellite videos. Therefore, this paper proposes a practical Blind SVSR algorithm (BSVSR) that explores sharper cues by considering pixel-wise blur levels in a coarse-to-fine manner. Specifically, we employ multi-scale deformable convolution to coarsely aggregate temporal redundancy into adjacent frames via window-sliding progressive fusion. The adjacent features are then finely merged into the mid-feature using deformable attention, which measures the blur level of each pixel and assigns more weight to informative pixels, thereby strengthening the representation of sharpness. Moreover, we devise a pyramid spatial transformation module to adjust the solution space of the sharp mid-feature, enabling flexible feature adaptation in multi-level domains. Quantitative and qualitative evaluations on both simulated and real-world satellite videos demonstrate that our BSVSR performs favorably against state-of-the-art non-blind and blind SR models. Code will be available at https://github.com/XY-boy/Blind-Satellite-VSR
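
The abstract's core idea, weighting temporally aligned features by a per-pixel sharpness estimate before merging them into the mid-frame feature, can be illustrated with a short PyTorch sketch. This is a minimal illustration under stated assumptions, not the authors' released implementation: the module name BlurAwareFusion, the single-level (non-pyramid) alignment, and all hyper-parameters are assumptions made for brevity.

```python
# A minimal sketch (not the authors' released code) of blur-aware temporal
# fusion: adjacent-frame features are deformably aligned to the mid frame,
# then merged with attention weights that favor sharper pixels. The module
# name, single-level alignment, and hyper-parameters are illustrative.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class BlurAwareFusion(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Offsets for deformable alignment, predicted from the concatenation
        # of a neighbor-frame feature and the mid-frame feature.
        self.offset_conv = nn.Conv2d(2 * channels, 2 * 3 * 3, 3, padding=1)
        self.deform_conv = DeformConv2d(channels, channels, 3, padding=1)
        # A per-pixel score acting as the "sharpness" attention logit.
        self.score_conv = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, T, C, H, W); the middle frame is the reference.
        b, t, c, h, w = feats.shape
        mid = feats[:, t // 2]
        aligned, logits = [], []
        for i in range(t):
            offsets = self.offset_conv(torch.cat([feats[:, i], mid], dim=1))
            warped = self.deform_conv(feats[:, i], offsets)
            aligned.append(warped)
            logits.append(self.score_conv(warped))            # (B, 1, H, W)
        aligned = torch.stack(aligned, dim=1)                  # (B, T, C, H, W)
        weights = torch.softmax(torch.stack(logits, dim=1), dim=1)
        # Pixels judged sharper receive larger weights in the merge.
        return (weights * aligned).sum(dim=1)                  # (B, C, H, W)


if __name__ == "__main__":
    fusion = BlurAwareFusion(channels=64)
    frames = torch.randn(2, 5, 64, 32, 32)    # 5-frame feature window
    print(fusion(frames).shape)                # torch.Size([2, 64, 32, 32])
```

The softmax over the temporal axis mirrors the paper's stated goal of assigning larger weights to informative (sharper) pixels during the merge; the actual BSVSR model additionally applies this in a coarse-to-fine, multi-scale fashion.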
