
Supervised Contrastive Vision Transformer for Breast Histopathological Image Classification (2404.11052v2)

Published 17 Apr 2024 in cs.CV and cs.LG

Abstract: Invasive ductal carcinoma (IDC) is the most prevalent form of breast cancer. Histopathological examination of breast tissue is critical for diagnosing and classifying breast cancer. Although existing methods have shown promising results, there is still room for improvement in the classification accuracy and generalization of IDC from histopathology images. We present a novel approach, the Supervised Contrastive Vision Transformer (SupCon-ViT), which improves the accuracy and generalization of IDC classification by leveraging the complementary strengths of transfer learning, i.e., a pre-trained vision transformer, and supervised contrastive learning. Our results on a benchmark breast cancer dataset demonstrate that SupCon-ViT achieves state-of-the-art performance in IDC classification, with an F1-score of 0.8188, precision of 0.7692, and specificity of 0.8971, outperforming existing methods. In addition, the proposed model remains resilient with minimal labeled data, making it well suited to real-world clinical settings where labeled data is limited. Our findings suggest that supervised contrastive learning in conjunction with pre-trained vision transformers is a viable strategy for accurate classification of IDC, paving the way for more efficient and reliable diagnosis of breast cancer through histopathological image analysis.
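The core training signal described above is the supervised contrastive (SupCon) loss applied to embeddings from a pre-trained vision transformer: patches sharing a label are pulled together in embedding space, and patches with different labels are pushed apart. As a minimal sketch of that loss over L2-normalized feature vectors (not the authors' implementation; the temperature value here is an illustrative default, not taken from the paper):

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss over a batch of embeddings.

    features: (N, D) array of encoder outputs (e.g. ViT [CLS] embeddings)
    labels:   (N,) integer class labels (e.g. IDC-positive / IDC-negative)
    Returns the mean loss over anchors that have at least one positive.
    """
    # L2-normalize so dot products are cosine similarities
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-contrast from the softmax

    # log-softmax over all other samples in the batch, per anchor
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    # positives: same label, excluding the anchor itself
    n = len(labels)
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)

    # mean log-probability of positives for each anchor
    masked = np.where(pos, log_prob, 0.0)
    per_anchor = masked.sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    has_pos = pos.any(axis=1)
    return -per_anchor[has_pos].mean()
```

Minimizing this loss tightens same-class clusters before (or alongside) training the classification head; a lower temperature sharpens the contrast between positives and negatives.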
