Semi-Supervised Relational Contrastive Learning (2304.05047v2)

Published 11 Apr 2023 in cs.CV

Abstract: Disease diagnosis from medical images via supervised learning usually depends on tedious, error-prone, and costly image labeling by medical experts. Semi-supervised and self-supervised learning offer an alternative by extracting useful signal from readily available unlabeled images. We present Semi-Supervised Relational Contrastive Learning (SRCL), a novel semi-supervised learning model that combines a self-supervised contrastive loss with sample relation consistency to exploit unlabeled data more effectively. Our experiments with SRCL explore both pre-train/fine-tune and joint learning of the pretext (contrastive learning) and downstream (diagnostic classification) tasks. We validate on the ISIC 2018 Challenge skin lesion classification benchmark and demonstrate the effectiveness of our semi-supervised method across varying amounts of labeled data.
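
The abstract names two ingredients for exploiting unlabeled data: a self-supervised contrastive loss and a sample relation consistency term. The sketch below is a minimal, hypothetical PyTorch rendering of those two losses, assuming a SimCLR-style NT-Xent contrastive loss over two augmented views and a student/teacher consistency term that matches pairwise cosine-similarity matrices; the function names, temperature value, and teacher construction are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of the two unlabeled-data losses named in the abstract.
# Details (temperature, normalization, teacher construction) are assumptions.

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss over two augmented views.

    z1, z2: (N, d) projections of the same N images under two augmentations.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)           # (2N, d)
    sim = z @ z.t() / temperature            # (2N, 2N) cosine similarities
    sim.fill_diagonal_(float('-inf'))        # self-similarity is never a candidate
    n = z1.size(0)
    # The positive for row i is its other view: i+n (first half) or i-n (second).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def relation_consistency_loss(feat_student, feat_teacher):
    """Penalize disagreement between student and teacher pairwise sample relations.

    Each model's batch features are turned into an (N, N) cosine-similarity
    (relation) matrix; the student's matrix is pulled toward the teacher's.
    """
    rs = F.normalize(feat_student, dim=1)
    rt = F.normalize(feat_teacher, dim=1)
    return F.mse_loss(rs @ rs.t(), (rt @ rt.t()).detach())
```

In a joint-training setup these terms would typically be added, with tunable weights, to the supervised cross-entropy on the labeled subset (with the teacher taken as, e.g., an EMA copy of the student, as in Mean Teacher); in the pre-train/fine-tune setup the abstract also mentions, the contrastive term alone would drive pretraining before diagnostic fine-tuning.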
