Exploring Self-Supervised Representation Learning For Low-Resource Medical Image Analysis (2303.02245v2)
Abstract: The success of self-supervised learning (SSL) has largely been attributed to the availability of large-scale unlabeled datasets. However, in a specialized domain such as medical imaging, which differs substantially from natural images, this assumption of data availability is unrealistic and impractical: the data are scarce and scattered across small databases collected for specific prognosis tasks. To this end, we investigate the applicability of self-supervised learning algorithms to small-scale medical imaging datasets. In particular, we evaluate $4$ state-of-the-art SSL methods on three publicly accessible \emph{small} medical imaging datasets. Our investigation reveals that in-domain, low-resource SSL pre-training can yield performance competitive with transfer learning from large-scale datasets (such as ImageNet). Furthermore, we extensively analyse our empirical findings to provide valuable insights that can motivate further research towards circumventing the need for pre-training on a large image corpus. To the best of our knowledge, this is the first attempt to holistically explore self-supervision on low-resource medical datasets.
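The contrastive SSL methods evaluated in this line of work (e.g. SimCLR) pre-train by pulling together embeddings of two augmented views of the same image while pushing apart embeddings of different images. As a minimal sketch of that objective, the snippet below implements the NT-Xent (normalized temperature-scaled cross-entropy) loss in NumPy; the function name and the choice of NumPy over a deep-learning framework are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss (as in SimCLR).

    z1, z2: (N, d) arrays of embeddings for two augmented views of
    the same N images; row i of z1 and row i of z2 form a positive pair.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # cosine similarities / tau
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # The positive for index i is index i+N (and vice versa).
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Row-wise cross-entropy: -log softmax probability of the positive.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = logsumexp - sim[np.arange(2 * n), targets]
    return loss.mean()
```

With well-aligned positive pairs the loss is small, and it grows when positives are mismatched, which is what drives the encoder to learn augmentation-invariant representations.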
- Soumitri Chattopadhyay
- Soham Ganguly
- Sreejit Chaudhury
- Sayan Nag
- Samiran Chattopadhyay