AnoMalNet: Outlier Detection based Malaria Cell Image Classification Method Leveraging Deep Autoencoder (2303.05789v2)

Published 10 Mar 2023 in eess.IV and cs.CV

Abstract: Class imbalance is a pervasive issue in disease classification from medical images. The class distribution must be balanced during training to obtain decent results. However, for rare diseases, images from affected patients are much harder to come by than images from non-affected patients, producing unwanted class imbalance. Various approaches to tackling class imbalance have been explored, each with its own drawbacks. In this research, we propose an outlier-detection-based binary medical image classification technique that can handle even the most extreme cases of class imbalance. We utilize a dataset of malaria-parasitized and uninfected cells. An autoencoder model, AnoMalNet, is first trained only on the uninfected cell images and is then used to classify both affected and non-affected cell images by thresholding a loss value. We achieve an accuracy, precision, recall, and F1 score of 98.49%, 97.07%, 100%, and 98.52%, respectively, performing better than large deep learning models and other published works. Since the proposed approach provides competitive results without requiring disease-positive samples during training, it should prove useful for binary disease classification on imbalanced datasets.
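
The abstract describes an outlier-detection workflow: a convolutional autoencoder is trained only on uninfected (majority-class) cell images, and at inference time the per-image reconstruction loss is thresholded to flag parasitized cells as outliers. The sketch below illustrates that idea only; the architecture, input resolution, training schedule, and threshold are illustrative assumptions, not the authors' exact AnoMalNet configuration.

```python
# Illustrative sketch of autoencoder-based outlier detection for
# binary cell-image classification. Architecture and hyperparameters
# are assumptions, not the published AnoMalNet model.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = 64  # assumed input resolution


def build_autoencoder():
    """Small convolutional autoencoder that reconstructs its input image."""
    inp = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model


def classify_by_reconstruction_loss(uninfected_train, test_images, threshold):
    """Train only on uninfected images, then flag high-loss images as parasitized.

    uninfected_train, test_images: float arrays in [0, 1] of shape (N, 64, 64, 3).
    threshold: reconstruction-error cutoff (an assumed, tunable value).
    Returns an array with 1 = parasitized (outlier), 0 = uninfected.
    """
    ae = build_autoencoder()
    ae.fit(uninfected_train, uninfected_train, epochs=20, batch_size=32, verbose=0)
    recon = ae.predict(test_images, verbose=0)
    # Per-image mean squared reconstruction error.
    errors = np.mean((test_images - recon) ** 2, axis=(1, 2, 3))
    # Images the autoencoder reconstructs poorly are treated as outliers.
    return (errors > threshold).astype(int)
```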
