Heterogeneous Network Based Contrastive Learning Method for PolSAR Land Cover Classification (2403.19902v2)
Abstract: Polarimetric synthetic aperture radar (PolSAR) image interpretation is widely used in many fields, and deep learning has recently made significant progress in PolSAR image classification. Supervised learning (SL) requires a large amount of high-quality labeled PolSAR data to perform well, but manually labeled data are scarce. As a result, SL tends to overfit and its generalization performance degrades. The scattering confusion problem is a further significant challenge that has attracted increasing attention. To address these problems, this article proposes a Heterogeneous Network based Contrastive Learning method (HCLNet), which learns high-level representations from unlabeled PolSAR data for few-shot classification by exploiting multi-features and superpixels. Going beyond conventional contrastive learning (CL), HCLNet introduces a heterogeneous architecture, for the first time, to better utilize heterogeneous PolSAR features. It also develops two easy-to-use plugins that narrow the domain gap between optical and PolSAR imagery: a feature filter, which enhances the complementarity of multi-features, and superpixel-based instance discrimination, which increases the diversity of negative samples. Experiments on three widely used PolSAR benchmark datasets demonstrate the superiority of HCLNet over state-of-the-art methods, and ablation studies verify the importance of each component. Moreover, this work offers insight into how to efficiently utilize the multi-features of PolSAR data to learn better high-level representations in CL, and how to construct networks better suited to PolSAR data.
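The abstract gives no equations, but the instance-discrimination objective underlying contrastive methods of this kind is typically an InfoNCE-style loss that pulls an anchor feature toward its positive view and pushes it away from negatives. The sketch below is a minimal, hedged illustration under assumed conventions (function names, the temperature value, and the comment about drawing negatives from other superpixels are assumptions, not the paper's implementation):

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style contrastive loss for a single anchor.

    anchor, positive: 1-D L2-normalized feature vectors (two views
    of the same instance).
    negatives: 2-D array with one L2-normalized feature per row,
    e.g. features drawn from other superpixels, as superpixel-based
    instance discrimination suggests (the sampling strategy here is
    an assumption, not HCLNet's exact procedure).
    """
    # Cosine similarities scaled by temperature (features are
    # assumed L2-normalized, so the dot product is the cosine).
    pos_sim = anchor @ positive / temperature
    neg_sims = negatives @ anchor / temperature
    # Negative log-softmax of the positive over [positive, negatives].
    logits = np.concatenate(([pos_sim], neg_sims))
    return float(-pos_sim + np.log(np.sum(np.exp(logits))))
```

With a positive identical to the anchor and orthogonal negatives the loss is near zero; with an orthogonal "positive" it rises toward log(1 + #negatives), which is the expected behavior of an instance-discrimination objective.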