
Change-Agent: Towards Interactive Comprehensive Remote Sensing Change Interpretation and Analysis (2403.19646v3)

Published 28 Mar 2024 in cs.CV

Abstract: Monitoring changes in the Earth's surface is crucial for understanding natural processes and human impacts, necessitating precise and comprehensive interpretation methodologies. Remote sensing satellite imagery offers a unique perspective for monitoring these changes, leading to the emergence of remote sensing image change interpretation (RSICI) as a significant research focus. Current RSICI technology encompasses change detection and change captioning, each with limitations in providing comprehensive interpretation. To address this, we propose an interactive Change-Agent, which can follow user instructions to achieve comprehensive change interpretation and insightful analysis, such as change detection, change captioning, change object counting, and change cause analysis. The Change-Agent integrates a multi-level change interpretation (MCI) model as the eyes and a large language model (LLM) as the brain. The MCI model contains two branches, pixel-level change detection and semantic-level change captioning, in which a Bi-temporal Iterative Interaction (BI3) layer is proposed to enhance the model's discriminative feature representation capabilities. To support the training of the MCI model, we build the LEVIR-MCI dataset, which contains a large number of change masks and change captions. Experiments demonstrate the state-of-the-art performance of the MCI model in achieving change detection and change description simultaneously, and highlight the promising application value of our Change-Agent in facilitating comprehensive interpretation of surface changes, opening up a new avenue for intelligent remote sensing applications. To facilitate future research, we will make our dataset and the codebase of the MCI model and Change-Agent publicly available at https://github.com/Chen-Yang-Liu/Change-Agent
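
The abstract describes an agent-style architecture: the MCI model (the "eyes") produces a pixel-level change mask and a semantic-level change caption from a bi-temporal image pair, and an LLM (the "brain") reasons over those outputs to follow a user instruction. The sketch below is a minimal illustration of that orchestration pattern only; every class name, method signature, and prompt shown here is a hypothetical placeholder, not the authors' implementation (which is available in the linked repository).

```python
# Hypothetical sketch of the Change-Agent pattern described in the abstract:
# an LLM "brain" routes a user instruction through the two branches of a
# multi-level change interpretation (MCI) model, the "eyes".
from dataclasses import dataclass
from typing import Callable

import numpy as np


@dataclass
class MCIOutput:
    change_mask: np.ndarray   # pixel-level change detection branch
    change_caption: str       # semantic-level change captioning branch


class MCIModel:
    """Placeholder for the dual-branch MCI model (detection + captioning)."""

    def interpret(self, image_t1: np.ndarray, image_t2: np.ndarray) -> MCIOutput:
        # A real model would run both branches over the bi-temporal pair;
        # here we return a trivially empty result for illustration.
        mask = np.zeros(image_t1.shape[:2], dtype=np.uint8)
        return MCIOutput(change_mask=mask, change_caption="no notable change")


class ChangeAgent:
    """LLM 'brain' that turns MCI outputs into task-specific answers."""

    def __init__(self, mci: MCIModel, llm: Callable[[str], str]):
        self.mci = mci
        self.llm = llm

    def run(self, instruction: str,
            image_t1: np.ndarray, image_t2: np.ndarray) -> str:
        out = self.mci.interpret(image_t1, image_t2)
        changed_pixels = int(out.change_mask.sum())
        # The LLM receives the structured change information plus the
        # user's request and produces the final analysis (object counting,
        # cause analysis, etc.).
        prompt = (
            f"Change caption: {out.change_caption}\n"
            f"Changed pixels: {changed_pixels}\n"
            f"User instruction: {instruction}\n"
            "Answer the instruction using the change information above."
        )
        return self.llm(prompt)


if __name__ == "__main__":
    # Stub LLM that simply echoes the prompt it was given.
    agent = ChangeAgent(MCIModel(), llm=lambda p: f"[LLM answer based on]\n{p}")
    t1 = np.zeros((256, 256, 3))
    t2 = np.zeros((256, 256, 3))
    print(agent.run("How many buildings were added?", t1, t2))
```

In this pattern the LLM never sees raw pixels; it reasons over the structured outputs of the vision branches, which is why the paper casts the MCI model as the "eyes" and the LLM as the "brain".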

Authors (6)
  1. Chenyang Liu (26 papers)
  2. Keyan Chen (34 papers)
  3. Haotian Zhang (107 papers)
  4. Zipeng Qi (15 papers)
  5. Zhengxia Zou (52 papers)
  6. Zhenwei Shi (77 papers)
Citations (15)