FMDNN: A Fuzzy-guided Multi-granular Deep Neural Network for Histopathological Image Classification (2407.15312v1)
Abstract: Histopathological image classification is a pivotal task in computer-aided diagnosis. Accurate identification and categorization of histopathological images are critical for early disease detection and treatment. Pathologists typically follow a multi-tiered diagnostic process, assessing abnormal cell regions at different magnifications; automated feature extraction, however, is often performed at a single granularity, overlooking the multi-granular characteristics of cells. To address this issue, we propose the Fuzzy-guided Multi-granular Deep Neural Network (FMDNN). Inspired by pathologists' multi-granular diagnostic approach, we extract features from cell structures at coarse, medium, and fine granularity, enabling the model to fully exploit the information in histopathological images. We incorporate fuzzy logic to handle the redundancy among key features that arises during multi-granular extraction: cell features are described from different perspectives by multiple fuzzy membership functions, which are fused into universal fuzzy features. A fuzzy-guided cross-attention module then steers these universal fuzzy features toward the multi-granular features, and an encoder propagates the result to all patch tokens, improving classification accuracy and robustness. In experiments on multiple public datasets, our model significantly outperforms commonly used histopathological image classification methods and shows commendable interpretability.
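The pipeline sketched in the abstract (multiple fuzzy membership functions fused into universal fuzzy features, which then act as queries in a cross-attention over multi-granular patch features) can be illustrated with a minimal NumPy toy. This is a sketch under assumptions, not the paper's implementation: the Gaussian/triangular membership choices, fusion by averaging, and the single-head attention are all illustrative stand-ins for the components FMDNN learns.

```python
import numpy as np

def gaussian_mf(x, c=0.5, sigma=0.2):
    # One fuzzy "view" of a feature: closeness to a prototype value c.
    return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def triangular_mf(x, a=0.0, b=0.5, c=1.0):
    # A second fuzzy view with a triangular profile on [a, c], peak at b.
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def universal_fuzzy_features(x):
    # Fuse the per-view memberships (here: simple average) into one descriptor.
    return (gaussian_mf(x) + triangular_mf(x)) / 2.0

def cross_attention(q, kv):
    # q: (n_q, d) fuzzy-derived queries; kv: (n_kv, d) multi-granular tokens.
    scores = q @ kv.T / np.sqrt(q.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ kv  # fuzzy-guided mixture of multi-granular features

rng = np.random.default_rng(0)
feats = rng.random((4, 8))               # toy patch features scaled to [0, 1]
fuzzy = universal_fuzzy_features(feats)  # universal fuzzy descriptor per patch
guided = cross_attention(fuzzy, feats)   # fuzzy features guide the attention
print(guided.shape)  # (4, 8)
```

In FMDNN itself the membership parameters and attention projections are learned and the guided features are propagated through a transformer encoder to all patch tokens; the toy above only shows the data flow of "fuzzify, fuse, then attend".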