MLVICX: Multi-Level Variance-Covariance Exploration for Chest X-ray Self-Supervised Representation Learning (2403.11504v1)
Abstract: Self-supervised learning (SSL) can reduce the need for manual annotation and make deep learning models more accessible for medical image analysis tasks. By leveraging representations learned from unlabeled data, self-supervised models perform well on downstream tasks with little to no fine-tuning. However, medical images such as chest X-rays, which are characterized by complex anatomical structures and diverse clinical conditions, require representation learning techniques that encode fine-grained details while preserving the broader contextual information. In this context, we introduce MLVICX (Multi-Level Variance-Covariance Exploration for Chest X-ray Self-Supervised Representation Learning), an approach for capturing rich representations in the form of embeddings from chest X-ray images. Central to our approach is a novel multi-level variance and covariance exploration strategy that empowers the model to detect diagnostically meaningful patterns while effectively reducing redundancy. By regulating the variance and covariance of the learned embeddings, MLVICX promotes the retention of critical medical information across both global and local contextual details. We demonstrate the performance of MLVICX in advancing self-supervised chest X-ray representation learning through comprehensive experiments. The performance gains we observe across various downstream tasks highlight the significance of the proposed approach for precision medical diagnosis and comprehensive image analysis. For pretraining, we used the NIH Chest X-ray dataset, while for downstream tasks we used the NIH Chest X-ray, Vinbig-CXR, RSNA Pneumonia, and SIIM-ACR Pneumothorax datasets. Overall, we observe more than 3% performance gains over state-of-the-art SSL approaches across various downstream tasks.
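The abstract does not spell out the loss, but its variance-covariance terminology matches the VICReg-style regularization this line of work builds on. Below is a minimal PyTorch sketch of such a regularizer applied at multiple encoder depths, offered as an illustrative assumption rather than the authors' implementation: the function names, pooling choice, and weights (`gamma`, `lambda_var`, `lambda_cov`) are hypothetical. A full SSL objective would also include an invariance/similarity term between augmented views, omitted here for brevity.

```python
# Sketch only: a VICReg-style variance-covariance regularizer applied at
# multiple feature levels. Not the authors' code; hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def variance_covariance_loss(z, gamma=1.0, eps=1e-4):
    """z: (N, D) batch of embeddings. A hinge term keeps each dimension's
    std above gamma (variance term); off-diagonal covariance entries are
    pushed toward zero (redundancy reduction)."""
    z = z - z.mean(dim=0)                      # center each dimension
    std = torch.sqrt(z.var(dim=0) + eps)       # per-dimension std
    var_loss = F.relu(gamma - std).mean()      # penalize collapsed dimensions
    n, d = z.shape
    cov = (z.T @ z) / (n - 1)                  # (D, D) covariance matrix
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = off_diag.pow(2).sum() / d       # penalize cross-dim correlation
    return var_loss, cov_loss

def multi_level_vc_loss(feature_maps, lambda_var=1.0, lambda_cov=1.0):
    """feature_maps: list of (N, C, H, W) tensors from different encoder
    stages; each is global-average-pooled to (N, C) before the variance
    and covariance terms are computed and summed across levels."""
    total = 0.0
    for fmap in feature_maps:
        z = fmap.flatten(2).mean(dim=2)        # GAP: (N, C, H*W) -> (N, C)
        v, c = variance_covariance_loss(z)
        total = total + lambda_var * v + lambda_cov * c
    return total

# Toy usage with random stand-ins for two encoder stages.
feats = [torch.randn(8, 256, 28, 28), torch.randn(8, 512, 14, 14)]
print(multi_level_vc_loss(feats))
```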
Authors: Azad Singh, Vandan Gorade, Deepak Mishra