Breast Cancer Classification with Enhanced Interpretability: DALAResNet50 and DT Grad-CAM (2308.13150v16)

Published 25 Aug 2023 in eess.IV, cs.CV, and cs.LG

Abstract: Automatic classification of breast cancer in histopathology images is crucial for accurate diagnosis and effective treatment planning. Classification methods based on the ResNet architecture have gained prominence because their skip connections mitigate vanishing gradients and allow low-level and high-level feature information to be combined, significantly improving accuracy. However, the conventional ResNet architecture still struggles with data imbalance and limited interpretability, shortcomings that demand cross-domain knowledge and collaboration among medical experts. To address these challenges, this study proposes a novel method for breast cancer classification: the Dual-Activated Lightweight Attention ResNet50 (DALAResNet50) model. It integrates a pre-trained ResNet50 backbone with a lightweight attention mechanism embedded in the fourth layer, followed by two fully connected layers with LeakyReLU and ReLU activation functions to enhance feature learning. Extensive experiments on the BreakHis, BACH, and Mini-DDSM datasets demonstrate that DALAResNet50 outperforms state-of-the-art models in accuracy, F1 score, IBA, and GMean, and is particularly effective on imbalanced datasets. Furthermore, the proposed Dynamic Threshold Grad-CAM (DT Grad-CAM) method produces clearer, more focused visualizations, enhancing interpretability and assisting medical experts in identifying key features.
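The core idea behind DT Grad-CAM, as described above, is to suppress low-activation regions of a Grad-CAM heatmap using an adaptively chosen threshold rather than showing the full continuous map. A minimal sketch of that idea follows; note this is not the authors' implementation, and the choice of Otsu's method as the "dynamic threshold" (the `otsu_threshold` helper) is an assumption made for illustration.

```python
import numpy as np


def otsu_threshold(heatmap: np.ndarray, bins: int = 256) -> float:
    """Pick the threshold that maximizes between-class variance (Otsu's method).

    Assumed stand-in for the paper's dynamic threshold; expects values in [0, 1].
    """
    hist, edges = np.histogram(heatmap.ravel(), bins=bins, range=(0.0, 1.0))
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)            # probability mass of the "background" class
    mu = np.cumsum(p * centers)  # cumulative mean
    mu_t = mu[-1]                # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)
    # Between-class variance for every candidate split point.
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return float(centers[np.argmax(sigma_b)])


def dt_grad_cam(heatmap: np.ndarray) -> np.ndarray:
    """Normalize a raw Grad-CAM map to [0, 1], then zero out values below
    an adaptively chosen threshold to sharpen the salient region."""
    h = heatmap - heatmap.min()
    rng = h.max()
    if rng > 0:
        h = h / rng
    t = otsu_threshold(h)
    return np.where(h >= t, h, 0.0)
```

In use, such a step would be applied to the coarse class-activation map produced by standard Grad-CAM before it is upsampled and overlaid on the histopathology image, keeping only the most salient regions visible.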

