
Progressive Class-Wise Attention (PCA) Approach for Diagnosing Skin Lesions (2306.07300v1)

Published 11 Jun 2023 in cs.LG, cs.AI, and cs.CV

Abstract: Skin cancer has the highest incidence rate of all cancers globally. Early detection is critical, as late-stage cases can be lethal. Classifying skin lesions is challenging, however, because lesions vary widely in colour, shape, and size, exhibit significant variation within the same class, and show notable similarities between different classes. This paper introduces a novel class-wise attention technique that gives each class equal consideration while uncovering finer details of skin lesions. This attention mechanism is applied progressively to fuse discriminative feature details from multiple scales. The proposed technique outperformed more than 15 state-of-the-art methods, including the winners of the HAM10000 and ISIC 2019 leaderboards, achieving an accuracy of 97.40% on the HAM10000 dataset and 94.9% on the ISIC 2019 dataset.
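The paper's implementation is not reproduced on this page, but the core idea the abstract describes — a separate attention map per class, so no class is drowned out, followed by attention-weighted pooling into per-class descriptors — can be sketched roughly as follows. This is a minimal NumPy illustration under assumed shapes; the projection matrix `weights`, the function name, and the spatial-softmax normalisation are illustrative choices, not the authors' exact design.

```python
import numpy as np

def class_wise_attention(features, weights):
    """Per-class spatial attention over one backbone feature map.

    features : (C, H, W) array, one stage of a CNN backbone.
    weights  : (K, C) array, a 1x1-conv-style projection per class
               (K = number of lesion classes). Illustrative only.
    Returns (K, H, W) attention maps and (K, C) per-class descriptors.
    """
    K, C = weights.shape
    # Per-class attention logits over spatial positions: (K, H, W).
    logits = np.einsum('kc,chw->khw', weights, features)
    # Softmax over each class's own spatial positions, so every class
    # gets a normalised attention map of its own instead of competing
    # with other classes for a single shared map.
    flat = logits.reshape(K, -1)
    flat = np.exp(flat - flat.max(axis=1, keepdims=True))
    attn = (flat / flat.sum(axis=1, keepdims=True)).reshape(K, *features.shape[1:])
    # Attention-weighted pooling yields one C-dim descriptor per class;
    # in a progressive scheme these would be fused across several scales.
    pooled = np.einsum('khw,chw->kc', attn, features)
    return attn, pooled

# Toy usage: 7 classes (as in HAM10000), a 32-channel 8x8 feature map.
rng = np.random.default_rng(0)
feats = rng.standard_normal((32, 8, 8))
proj = rng.standard_normal((7, 32))
attn, desc = class_wise_attention(feats, proj)
print(attn.shape, desc.shape)  # (7, 8, 8) (7, 32)
```

In a multi-scale, progressive variant, the same block would be applied to feature maps from successive backbone stages and the resulting per-class descriptors fused from coarse to fine.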
