
Learning to Aggregate Multi-Scale Context for Instance Segmentation in Remote Sensing Images (2111.11057v4)

Published 22 Nov 2021 in cs.CV

Abstract: The task of instance segmentation in remote sensing images, which aims at per-pixel labeling of objects at the instance level, is of great importance for various civil applications. Despite previous successes, most existing instance segmentation methods designed for natural images suffer sharp performance degradation when applied directly to top-view remote sensing images. Through careful analysis, we observe that the challenges mainly come from the lack of discriminative object features caused by severe scale variations, low contrast, and clustered distributions. To address these problems, a novel context aggregation network (CATNet) is proposed to improve the feature extraction process. The proposed model exploits three lightweight plug-and-play modules, namely dense feature pyramid network (DenseFPN), spatial context pyramid (SCP), and hierarchical region of interest extractor (HRoIE), to aggregate global visual context in the feature, spatial, and instance domains, respectively. DenseFPN is a multi-scale feature propagation module that establishes more flexible information flows by adopting inter-level residual connections, cross-level dense connections, and a feature re-weighting strategy. Leveraging the attention mechanism, SCP further augments the features by aggregating global spatial context into local regions. For each instance, HRoIE adaptively generates RoI features for different downstream tasks. Extensive evaluations of the proposed scheme on the iSAID, DIOR, NWPU VHR-10, and HRSID datasets demonstrate that it outperforms state-of-the-art methods at similar computational cost. Source code and pre-trained models are available at https://github.com/yeliudev/CATNet.
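The information flow the abstract attributes to DenseFPN — an inter-level residual connection plus a re-weighted fusion of all pyramid levels via cross-level dense connections — can be illustrated with a toy sketch. This is not the paper's implementation: scalar values stand in for whole feature maps, and the softmax re-weighting here is only one plausible reading of the "feature re-weighting strategy"; the function and variable names are invented for illustration.

```python
import math

def densefpn_level(features, weights, level):
    """Toy scalar sketch of one DenseFPN output level: the same-level input
    passes through as a residual, and every pyramid level contributes to a
    re-weighted fused term (cross-level dense connections)."""
    # softmax over learnable-style weights -> one coefficient per source level
    exps = [math.exp(w) for w in weights]
    total = sum(exps)
    alphas = [e / total for e in exps]
    # dense fusion: weighted sum over ALL levels, not just neighbors
    fused = sum(a * f for a, f in zip(alphas, features))
    # inter-level residual connection keeps the original level's feature
    return features[level] + fused

# four pyramid levels, toy scalar "features"
feats = [1.0, 2.0, 4.0, 8.0]
w = [0.0, 0.0, 0.0, 0.0]  # equal weights -> fusion reduces to a plain average
out = [densefpn_level(feats, w, i) for i in range(len(feats))]
print(out)  # each level = its own feature + shared average 3.75
```

In a real network the scalars would be tensors, the weighted sum would follow resizing to a common resolution, and the weights would be learned; the sketch only shows why every output level sees context from every input level while retaining its own signal.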

Authors (6)
  1. Ye Liu
  2. Huifang Li
  3. Chao Hu
  4. Shuang Luo
  5. Yan Luo
  6. Chang Wen Chen
Citations (22)