RSDehamba: Lightweight Vision Mamba for Remote Sensing Satellite Image Dehazing (2405.10030v1)

Published 16 May 2024 in cs.CV

Abstract: Remote sensing image dehazing (RSID) aims to remove nonuniform and physically irregular haze for high-quality image restoration. CNNs and Transformers have made extraordinary strides in the RSID arena, but these methods often struggle to balance adequate long-range dependency modeling against computational efficiency. To this end, we propose RSDehamba, the first lightweight Mamba-based network in the field of RSID. Inspired by the recent rise of the Selective State Space Model (SSM), with its linear complexity and strong long-range dependency modeling, our RSDehamba integrates the SSM framework into a U-Net architecture. Specifically, we propose the Vision Dehamba Block (VDB) as the core component of the network, which exploits the linear complexity of the SSM to achieve global context encoding. Simultaneously, a Direction-aware Scan Module (DSM) is designed to dynamically aggregate feature exchanges over different scan directions, effectively enhancing sensitivity to the spatially varying distribution of haze. In this way, RSDehamba combines long-range spatial dependency capture with channel information exchange for better extraction of haze features. Extensive experiments on widely used benchmarks validate the superior performance of RSDehamba against existing state-of-the-art methods.
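The DSM described above scans a 2D feature map along several directions with a linear-time recurrence and fuses the results. The sketch below is a minimal conceptual illustration of that idea, not the paper's implementation: it uses a plain exponential-decay recurrence in place of the learned selective SSM, and simple row-major/column-major traversals with reversal as the four scan directions; the function names and the fixed `decay` parameter are assumptions for illustration.

```python
import numpy as np

def scan_1d(x, decay=0.9):
    """Linear-time recurrence h[t] = decay * h[t-1] + x[t].

    Stands in for the selective state-space scan: each output
    aggregates all earlier positions in the sequence in O(L) time.
    """
    h = np.zeros_like(x, dtype=float)
    acc = 0.0
    for t in range(x.shape[0]):
        acc = decay * acc + x[t]
        h[t] = acc
    return h

def direction_aware_scan(feat, decay=0.9):
    """Scan an (H, W) map in four directions and average the results.

    Directions: row-major forward/backward and column-major
    forward/backward, so every pixel receives context from all
    four traversal orders (a simplified stand-in for the DSM).
    """
    outs = []
    for transpose in (False, True):
        f = feat.T if transpose else feat          # column-major via transpose
        flat = f.reshape(-1).astype(float)
        for reverse in (False, True):
            s = flat[::-1] if reverse else flat    # reversed traversal
            y = scan_1d(s, decay)
            if reverse:
                y = y[::-1]                        # undo reversal
            y = y.reshape(f.shape)
            outs.append(y.T if transpose else y)   # back to (H, W)
    return np.mean(outs, axis=0)
```

With `decay=0` the recurrence reduces to the identity, so the fused output equals the input; with `decay>0` each pixel mixes in context from all four traversal directions, which is the property the DSM relies on to model spatially varying haze.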

Authors (5)
  1. Huiling Zhou
  2. Xianhao Wu
  3. Hongming Chen
  4. Xiang Chen
  5. Xin He
Citations (3)