SSUMamba: Spatial-Spectral Selective State Space Model for Hyperspectral Image Denoising (2405.01726v7)

Published 2 May 2024 in eess.IV, cs.CV, and cs.LG

Abstract: Denoising is a crucial preprocessing step for hyperspectral images (HSIs) due to noise arising from intra-imaging mechanisms and environmental factors. Long-range spatial-spectral correlation modeling is beneficial for HSI denoising but often comes with high computational complexity. Based on the state space model (SSM), Mamba is known for its remarkable long-range dependency modeling capabilities and computational efficiency. Building on this, we introduce a memory-efficient spatial-spectral UMamba (SSUMamba) for HSI denoising, with the spatial-spectral continuous scan (SSCS) Mamba being the core component. SSCS Mamba alternates the row, column, and band in six different orders to generate the sequence and uses the bidirectional SSM to exploit long-range spatial-spectral dependencies. In each order, the images are rearranged between adjacent scans to ensure spatial-spectral continuity. Additionally, 3D convolutions are embedded into the SSCS Mamba to enhance local spatial-spectral modeling. Experiments demonstrate that SSUMamba achieves superior denoising results with lower memory consumption per batch compared to transformer-based methods. The source code is available at https://github.com/lronkitty/SSUMamba.
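A minimal conceptual sketch (not the authors' implementation) of the scan-order idea described above: the six permutations of the band, row, and column axes each flatten the HSI cube into a different 1D sequence for the bidirectional SSM. The helper name sscs_sequences and the NumPy-based layout are assumptions for illustration; the released SSUMamba code additionally rearranges elements between adjacent scans so each sequence stays spatially and spectrally continuous.

from itertools import permutations
import numpy as np

def sscs_sequences(hsi: np.ndarray):
    """Flatten an HSI cube (band, row, column) along all six axis orderings."""
    assert hsi.ndim == 3, "expect a (band, row, column) cube"
    sequences = []
    for order in permutations(range(3)):      # six orderings of (band, row, column)
        cube = np.transpose(hsi, order)       # e.g. (row, column, band)
        sequences.append(cube.reshape(-1))    # 1D sequence fed to the SSM
    return sequences

# Example: a toy 4-band, 8x8 cube yields six length-256 sequences, each of
# which would be scanned forward and backward by the bidirectional SSM.
toy = np.random.rand(4, 8, 8).astype(np.float32)
seqs = sscs_sequences(toy)
print(len(seqs), seqs[0].shape)               # -> 6 (256,)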

Authors (4)
  1. Guanyiman Fu (2 papers)
  2. Fengchao Xiong (5 papers)
  3. Jianfeng Lu (273 papers)
  4. Jun Zhou (370 papers)
Citations (8)
