RSBuilding: Towards General Remote Sensing Image Building Extraction and Change Detection with Foundation Model (2403.07564v2)
Abstract: The intelligent interpretation of buildings plays a significant role in urban planning and management, macroeconomic analysis, and population dynamics studies. Building interpretation in remote sensing imagery primarily encompasses two tasks: building extraction and change detection. However, current methodologies often treat these tasks as separate entities, failing to leverage their shared knowledge. Moreover, the complexity and diversity of remote sensing scenes pose additional challenges: most algorithms are designed to model individual small datasets and therefore lack cross-scene generalization. In this paper, we propose a comprehensive remote sensing building understanding model, termed RSBuilding, developed from a foundation-model perspective and designed to enhance both cross-scene generalization and task universality. Specifically, we extract image features using the prior knowledge of a foundation model and devise a multi-level feature sampler to augment scale information. To unify task representation and integrate the images' spatiotemporal cues, we introduce a cross-attention decoder driven by task prompts. To address the current shortage of datasets annotated for both tasks, we develop a federated training strategy that allows the model to converge smoothly even when supervision for one of the tasks is missing, thereby strengthening the complementarity between tasks. Our model was trained on a dataset comprising up to 245,000 images and validated on multiple building extraction and change detection benchmarks. The experimental results substantiate that RSBuilding can concurrently handle two structurally distinct tasks and exhibits robust zero-shot generalization capabilities.
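The core idea of the task-prompted cross-attention decoder can be illustrated with a minimal sketch: learnable prompt embeddings (one per task, e.g. extraction and change detection) act as queries that attend over the fused image features, so a single decoder serves both tasks. This is a simplified NumPy illustration under assumed shapes and names (`cross_attention_decode`, `task_prompts`, etc. are hypothetical), not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_decode(task_prompts, image_tokens):
    """Task-prompt queries attend over image tokens (scaled dot-product).

    task_prompts : (P, D) one learnable embedding per task
    image_tokens : (N, D) flattened (bi)temporal image features
    returns      : (P, D) one task-conditioned feature per prompt
    """
    d_k = task_prompts.shape[-1]
    scores = task_prompts @ image_tokens.T / np.sqrt(d_k)   # (P, N)
    weights = softmax(scores, axis=-1)                      # rows sum to 1
    return weights @ image_tokens                           # (P, D)

# toy example: 2 task prompts (extraction / change), 16 image tokens, dim 8
rng = np.random.default_rng(0)
prompts = rng.normal(size=(2, 8))
tokens = rng.normal(size=(16, 8))
out = cross_attention_decode(prompts, tokens)
print(out.shape)  # (2, 8)
```

In a full model each task-conditioned output would be projected into a dense mask head; the point of the sketch is only that switching the prompt row switches the task, which is what lets one decoder unify extraction and change detection.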