RSCaMa: Remote Sensing Image Change Captioning with State Space Model (2404.18895v3)
Abstract: Remote Sensing Image Change Captioning (RSICC) aims to describe in language the surface changes between multi-temporal remote sensing images, including the categories, locations, and dynamics of the changed objects (e.g., objects added or removed). This poses challenges for the spatial and temporal modeling of bi-temporal features. Although previous methods have made progress in spatial change perception, they remain weak in joint spatial-temporal modeling. To address this, we propose RSCaMa, a novel model that achieves efficient joint spatial-temporal modeling through multiple CaMa layers, enabling iterative refinement of bi-temporal features. For efficient spatial modeling, we introduce the recently popular Mamba (a state space model), which offers a global receptive field at linear complexity, into the RSICC task, and propose the Spatial Difference-aware SSM (SD-SSM), overcoming the limitations of previous CNN- and Transformer-based methods in receptive field and computational complexity. SD-SSM sharpens the model's ability to capture spatial changes. For efficient temporal modeling, considering the potential correlation between Mamba's sequential scanning and the temporality of RSICC, we propose the Temporal-Traversing SSM (TT-SSM), which scans the bi-temporal features in a temporally cross-wise manner, strengthening temporal understanding and information interaction. Experiments validate the effectiveness of this joint spatial-temporal modeling and demonstrate the outstanding performance of RSCaMa and the potential of Mamba for the RSICC task. Additionally, we systematically compare three language decoders (Mamba, a GPT-style decoder, and a Transformer decoder), providing valuable insights for future RSICC research. The code will be available at https://github.com/Chen-Yang-Liu/RSCaMa
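The two scanning schemes described in the abstract can be pictured concretely. Below is a minimal PyTorch sketch, not the authors' implementation: `SimpleSSM` is a stand-in for a Mamba block (a plain diagonal linear recurrence rather than a selective scan), and the exact forms of `sd_ssm` (injecting the bi-temporal difference before scanning) and `tt_ssm` (interleaving the two token sequences so a single scan alternates between time steps) are assumptions inferred from the abstract's descriptions of SD-SSM and TT-SSM.

```python
# Hypothetical sketch of the CaMa-layer ideas from the abstract.
# The SSM core below is a simple stand-in, NOT Mamba's selective scan,
# and the sd_ssm / tt_ssm forms are assumptions, not the paper's code.

import torch
import torch.nn as nn


class SimpleSSM(nn.Module):
    """Stand-in for a Mamba block: a diagonal linear state-space recurrence."""

    def __init__(self, dim: int):
        super().__init__()
        self.decay = nn.Parameter(torch.zeros(dim))  # per-channel state decay
        self.proj_in = nn.Linear(dim, dim)
        self.proj_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, L, D)
        a = torch.sigmoid(self.decay)                 # decay in (0, 1)
        u = self.proj_in(x)
        h = torch.zeros_like(u[:, 0])                 # state: (B, D)
        outs = []
        for t in range(u.shape[1]):                   # sequential scan over tokens
            h = a * h + (1 - a) * u[:, t]
            outs.append(h)
        return self.proj_out(torch.stack(outs, dim=1))


def sd_ssm(ssm: nn.Module, x1: torch.Tensor, x2: torch.Tensor):
    """Assumed SD-SSM form: scan each temporal feature sequence with an
    explicit bi-temporal difference injected as a change cue."""
    diff = x2 - x1
    return ssm(x1 + diff), ssm(x2 + diff)


def tt_ssm(ssm: nn.Module, x1: torch.Tensor, x2: torch.Tensor):
    """Assumed TT-SSM form: interleave the two token sequences as
    [t1_0, t2_0, t1_1, t2_1, ...] so one scan traverses across time."""
    B, L, D = x1.shape
    interleaved = torch.stack((x1, x2), dim=2).reshape(B, 2 * L, D)
    y = ssm(interleaved).reshape(B, L, 2, D)
    return y[:, :, 0], y[:, :, 1]                     # de-interleave per time step


# Usage: stacked CaMa layers iteratively refine the bi-temporal features.
if __name__ == "__main__":
    B, L, D = 2, 49, 64                               # e.g. 7x7 patch tokens
    x1, x2 = torch.randn(B, L, D), torch.randn(B, L, D)
    spatial, temporal = SimpleSSM(D), SimpleSSM(D)
    for _ in range(3):                                # three CaMa layers
        x1, x2 = sd_ssm(spatial, x1, x2)
        x1, x2 = tt_ssm(temporal, x1, x2)
    print(x1.shape, x2.shape)                         # torch.Size([2, 49, 64]) each
```

Note that the interleaved scan in `tt_ssm` costs the same as scanning the two sequences separately (linear in token count) while letting the recurrent state carry information between the two time steps at every spatial position, which is the intuition behind the "temporal cross-wise" scanning the abstract describes.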
Authors: Chenyang Liu, Keyan Chen, Bowen Chen, Haotian Zhang, Zhengxia Zou, Zhenwei Shi