Semi-Supervised Learning for hyperspectral images by non-parametrically predicting view assignment (2306.10955v1)
Abstract: Hyperspectral image (HSI) classification is currently gaining momentum because of the rich spectral information inherent in these images. However, HSIs suffer from the curse of dimensionality and usually require a large number of labelled samples for tasks such as classification, especially in the supervised setting. Recently, to train deep learning models effectively with minimal labelled samples, unlabeled samples have also been leveraged in self-supervised and semi-supervised settings. In this work, we leverage the idea of semi-supervised learning to assist the discriminative self-supervised pretraining of the models. The proposed method takes different augmented views of the unlabeled samples as input and assigns them the same pseudo-label, corresponding to the labelled samples from the downstream task. We train our model on two HSI datasets, namely the Houston dataset (from the 2013 Data Fusion Contest) and the Pavia University dataset, and show that the proposed approach performs better than self-supervised pretraining alone and fully supervised training.
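The non-parametric view-assignment step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a PAWS-style scheme in which each augmented view of an unlabeled sample is soft-labelled by its similarity to a small set of labelled support samples; the function name, the temperature `tau`, and the use of cosine similarity are all illustrative assumptions.

```python
import numpy as np

def soft_pseudo_labels(view_emb, support_emb, support_labels, n_classes, tau=0.1):
    """Assign soft pseudo-labels to unlabeled views non-parametrically,
    via similarity to labelled support samples (illustrative sketch).

    view_emb:       (n_views, d) embeddings of augmented unlabeled views
    support_emb:    (n_support, d) embeddings of labelled support samples
    support_labels: (n_support,) integer class labels of the support samples
    """
    # L2-normalise so dot products become cosine similarities
    v = view_emb / np.linalg.norm(view_emb, axis=1, keepdims=True)
    s = support_emb / np.linalg.norm(support_emb, axis=1, keepdims=True)

    # Temperature-scaled similarities, then a softmax over the support set
    sim = (v @ s.T) / tau                      # (n_views, n_support)
    sim -= sim.max(axis=1, keepdims=True)      # numerical stability
    w = np.exp(sim)
    w /= w.sum(axis=1, keepdims=True)

    # Soft pseudo-label = similarity-weighted average of support one-hots
    onehot = np.eye(n_classes)[support_labels] # (n_support, n_classes)
    return w @ onehot                          # (n_views, n_classes)
```

Because all augmented views of the same unlabeled sample are compared against the same labelled support set, views of one sample receive (approximately) the same pseudo-label, which is the consistency the method exploits during pretraining.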