Augmenting Prototype Network with TransMix for Few-shot Hyperspectral Image Classification (2401.11724v1)

Published 22 Jan 2024 in cs.CV and cs.AI

Abstract: Few-shot hyperspectral image classification aims to identify the class of each pixel in an image when only a few pixels are labeled. To obtain joint spatial-spectral features for each pixel, fixed-size patches centered on each pixel are commonly used for classification. However, examining the results of existing methods, we found that boundary patches, i.e., patches corresponding to pixels located at the boundaries of objects in the hyperspectral images, are hard to classify, because these patches mix spectral information from multiple classes. Motivated by this observation, we propose to augment the prototype network with TransMix for few-shot hyperspectral image classification (APNT). Taking the prototype network as the backbone, APNT adopts a transformer as the feature extractor to learn pixel-to-pixel relations and assign different attention to different pixels. Moreover, instead of training directly on patches cut from the hyperspectral images, it randomly mixes up two patches to imitate boundary patches and trains the model on the synthetic patches, with the aim of enlarging the number of hard training samples and enhancing their diversity. Following the data augmentation technique TransMix, the attention returned by the transformer is also used to mix the labels of the two patches, producing better labels for the synthetic patches. In our experiments, the proposed method demonstrates state-of-the-art performance and better robustness for few-shot hyperspectral image classification compared with existing methods.
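The patch-and-label mixing the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' code: two hyperspectral patches are combined CutMix-style, and, following TransMix, the label weight of the pasted patch is recomputed as the fraction of transformer attention mass falling inside the pasted region. All shapes and names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_patches(patch_a, patch_b, attn):
    """Mix two hyperspectral patches of shape (H, W, C) by pasting a
    random rectangle of patch_b onto patch_a, and derive the label
    weights from an attention map `attn` of shape (H, W) over the
    mixed patch, in the spirit of TransMix (illustrative sketch)."""
    h, w, _ = patch_a.shape
    # sample a random rectangular region to paste from patch_b
    ch, cw = rng.integers(1, h), rng.integers(1, w)
    y, x = rng.integers(0, h - ch + 1), rng.integers(0, w - cw + 1)
    mask = np.zeros((h, w), dtype=bool)
    mask[y:y + ch, x:x + cw] = True

    # synthetic "boundary" patch: patch_b inside the rectangle, patch_a outside
    mixed = np.where(mask[..., None], patch_b, patch_a)

    # TransMix idea: the weight of patch_b's label is the share of
    # attention mass inside the pasted region, not just its area
    lam_b = attn[mask].sum() / attn.sum()
    return mixed, (1.0 - lam_b, lam_b)
```

The mixed training label would then be `(1 - lam_b) * y_a + lam_b * y_b`; with uniform attention this reduces to plain CutMix area weighting, while a trained transformer shifts the weight toward the pixels it actually attends to.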

References (35)
  1. I. B. Strachan, E. Pattey, and J. B. Boisvert, “Impact of nitrogen and environmental conditions on corn as detected by hyperspectral reflectance,” Remote Sensing of Environment, vol. 80, no. 2, pp. 213–224, 2002.
  2. S. Chabrillat, R. Milewski, T. Schmid, M. Rodriguez, P. Escribano, M. Pelayo, and A. Palacios-Orueta, “Potential of hyperspectral imagery for the spatial assessment of soil erosion stages in agricultural semi-arid spain at different scales,” in 2014 IEEE Geoscience and Remote Sensing Symposium.   IEEE, 2014, pp. 2918–2921.
  3. P. Kuflik and S. R. Rotman, “Band selection for gas detection in hyperspectral images,” in 2012 IEEE 27th Convention of Electrical and Electronics Engineers in Israel.   IEEE, 2012, pp. 1–4.
  4. F. Melgani and L. Bruzzone, “Classification of hyperspectral remote sensing images with support vector machines,” IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 8, pp. 1778–1790, 2004.
  5. J. Li, J. M. Bioucas-Dias, and A. Plaza, “Semisupervised hyperspectral image classification using soft sparse multinomial logistic regression,” IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 2, pp. 318–322, 2012.
  6. N. Falco, J. A. Benediktsson, and L. Bruzzone, “Spectral and spatial classification of hyperspectral images based on ica and reduced morphological attribute profiles,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 11, pp. 6223–6240, 2015.
  7. Y. Qian, M. Ye, and J. Zhou, “Hyperspectral image classification based on structured sparse logistic regression and three-dimensional wavelet texture features,” IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 4, pp. 2276–2291, 2012.
  8. S. Li, W. Song, L. Fang, Y. Chen, P. Ghamisi, and J. A. Benediktsson, “Deep learning for hyperspectral image classification: An overview,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 9, pp. 6690–6709, 2019.
  9. M. Paoletti, J. Haut, J. Plaza, and A. Plaza, “Deep learning classifiers for hyperspectral imaging: A review,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 158, pp. 279–317, 2019.
  10. S. Yu, S. Jia, and C. Xu, “Convolutional neural networks for hyperspectral image classification,” Neurocomputing, vol. 219, pp. 88–98, 2017.
  11. H. Zhang, Y. Li, Y. Zhang, and Q. Shen, “Spectral-spatial classification of hyperspectral imagery using a dual-channel convolutional neural network,” Remote Sensing Letters, vol. 8, no. 5, pp. 438–447, 2017.
  12. F. Zhou, R. Hang, Q. Liu, and X. Yuan, “Hyperspectral image classification using spectral-spatial lstms,” Neurocomputing, vol. 328, pp. 39–47, 2019.
  13. D. Hong, L. Gao, J. Yao, B. Zhang, A. Plaza, and J. Chanussot, “Graph convolutional networks for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 7, pp. 5966–5978, 2020.
  14. B. Fang, Y. Li, H. Zhang, and J. C.-W. Chan, “Collaborative learning of lightweight convolutional neural network and deep clustering for hyperspectral image semi-supervised classification with limited training samples,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 161, pp. 164–178, 2020.
  15. C. Shi and C.-M. Pun, “Multi-scale hierarchical recurrent neural networks for hyperspectral image classification,” Neurocomputing, vol. 294, pp. 82–93, 2018.
  16. J. Snell, K. Swersky, and R. Zemel, “Prototypical networks for few-shot learning,” Advances in neural information processing systems, vol. 30, 2017.
  17. B. Zhang, X. Li, Y. Ye, Z. Huang, and L. Zhang, “Prototype completion with primitive knowledge for few-shot learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 3754–3762.
  18. K. J. Liang, S. B. Rangrej, V. Petrovic, and T. Hassner, “Few-shot learning with noisy labels,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 9089–9098.
  19. Y. Liu, W. Zhang, C. Xiang, T. Zheng, D. Cai, and X. He, “Learning to affiliate: Mutual centralized learning for few-shot classification,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 14411–14420.
  20. C. Doersch, A. Gupta, and A. Zisserman, “Crosstransformers: spatially-aware few-shot transfer,” Advances in Neural Information Processing Systems, vol. 33, pp. 21981–21993, 2020.
  21. A. Li, W. Huang, X. Lan, J. Feng, Z. Li, and L. Wang, “Boosting few-shot learning with adaptive margin loss,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 12576–12584.
  22. B. Liu, X. Yu, A. Yu, P. Zhang, G. Wan, and R. Wang, “Deep few-shot learning for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 4, pp. 2290–2304, 2018.
  23. H. Tang, Y. Li, X. Han, Q. Huang, and W. Xie, “A spatial–spectral prototypical network for hyperspectral remote sensing image,” IEEE Geoscience and Remote Sensing Letters, vol. 17, no. 1, pp. 167–171, 2019.
  24. J. Sun, X. Shen, and Q. Sun, “Hyperspectral image few-shot classification network based on the earth mover’s distance,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–14, 2022.
  25. X. Ma, S. Ji, J. Wang, J. Geng, and H. Wang, “Hyperspectral image classification based on two-phase relation learning network,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 12, pp. 10398–10409, 2019.
  26. F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales, “Learning to compare: Relation network for few-shot learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1199–1208.
  27. Z. Li, M. Liu, Y. Chen, Y. Xu, W. Li, and Q. Du, “Deep cross-domain few-shot learning for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–18, 2022.
  28. J. Bai, S. Huang, Z. Xiao, X. Li, Y. Zhu, A. C. Regan, and L. Jiao, “Few-shot hyperspectral image classification based on adaptive subspaces and feature transformation,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–17, 2022.
  29. B. Xi, J. Li, Y. Li, R. Song, D. Hong, and J. Chanussot, “Few-shot learning with class-covariance metric for hyperspectral image classification,” IEEE Transactions on Image Processing, vol. 31, pp. 5079–5092, 2022.
  30. Y. Zhang, W. Li, M. Zhang, S. Wang, R. Tao, and Q. Du, “Graph information aggregation cross-domain few-shot learning for hyperspectral image classification,” IEEE Transactions on Neural Networks and Learning Systems, 2022.
  31. Q. Liu, J. Peng, Y. Ning, N. Chen, W. Sun, Q. Du, and Y. Zhou, “Refined prototypical contrastive learning for few-shot hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–14, 2023.
  32. C. Liu, L. Yang, Z. Li, W. Yang, Z. Han, J. Guo, and J. Yu, “Multi-view relation learning for cross-domain few-shot hyperspectral image classification,” arXiv preprint arXiv:2311.01212, 2023.
  33. Y. Wang, M. Liu, Y. Yang, Z. Li, Q. Du, Y. Chen, F. Li, and H. Yang, “Heterogeneous few-shot learning for hyperspectral image classification,” IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1–5, 2021.
  34. J.-N. Chen, S. Sun, J. He, P. H. Torr, A. Yuille, and S. Bai, “Transmix: Attend to mix for vision transformers,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 12135–12144.
  35. S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo, “Cutmix: Regularization strategy to train strong classifiers with localizable features,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 6023–6032.
Authors (7)
  1. Chun Liu (122 papers)
  2. Longwei Yang (4 papers)
  3. Dongmei Dong (1 paper)
  4. Zheng Li (326 papers)
  5. Wei Yang (349 papers)
  6. Zhigang Han (6 papers)
  7. Jiayao Wang (4 papers)
