PCDAL: A Perturbation Consistency-Driven Active Learning Approach for Medical Image Segmentation and Classification (2306.16918v1)

Published 29 Jun 2023 in eess.IV and cs.CV

Abstract: In recent years, deep learning has become a breakthrough technique for assisting medical image diagnosis. Supervised learning using convolutional neural networks (CNNs) provides state-of-the-art performance and has served as a benchmark for various medical image segmentation and classification tasks. However, supervised learning relies heavily on large-scale annotated data, which is expensive, time-consuming, and sometimes impractical to acquire in medical imaging applications. Active Learning (AL) methods have been widely applied in natural image classification tasks to reduce annotation costs by selecting the most valuable examples from the unlabeled data pool. However, their application to medical image segmentation tasks is limited, and there is currently no effective and universal AL-based method specifically designed for 3D medical image segmentation. To address this limitation, we propose an AL-based method that can be applied simultaneously to 2D medical image classification, 2D segmentation, and 3D medical image segmentation tasks. We extensively validated the proposed active learning method on three publicly available and challenging medical image datasets: the Kvasir Dataset, the COVID-19 Infection Segmentation Dataset, and the BraTS2019 Dataset. The experimental results demonstrate that PCDAL achieves significantly improved performance with fewer annotations on 2D classification and segmentation tasks as well as 3D segmentation tasks. The code for this study is available at https://github.com/ortonwang/PCDAL.
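The abstract's core idea is to rank unlabeled samples by how much a model's prediction changes under input perturbations, and to send the least consistent (most uncertain) samples for annotation. Below is a minimal sketch of such a selection step for a 2D segmentation model, assuming PyTorch; the data-loader interface, the horizontal-flip perturbation, and the mean-absolute-disagreement score are illustrative assumptions, not necessarily the exact components used in PCDAL (see the linked repository for the reference implementation).

```python
# Illustrative sketch of perturbation-consistency-based sample selection.
# The perturbation and scoring rule here are assumptions for exposition,
# not the authors' exact method.
import torch


@torch.no_grad()
def consistency_scores(model, unlabeled_loader, device="cuda"):
    """Score each unlabeled image by how much its prediction changes
    under a simple perturbation (here: a horizontal flip)."""
    model.eval()
    scores, ids = [], []
    # Hypothetical loader yielding (sample_ids, image_batch) pairs.
    for sample_ids, images in unlabeled_loader:
        images = images.to(device)
        p_orig = torch.softmax(model(images), dim=1)
        # Perturb the input, predict, then flip the prediction back
        # so the two probability maps are spatially aligned.
        flipped = torch.flip(images, dims=[-1])
        p_pert = torch.flip(torch.softmax(model(flipped), dim=1), dims=[-1])
        # Mean absolute disagreement per image: higher means less consistent.
        disagreement = (p_orig - p_pert).abs().mean(
            dim=tuple(range(1, p_orig.dim()))
        )
        scores.append(disagreement.cpu())
        ids.extend(sample_ids)
    return ids, torch.cat(scores)


def select_for_annotation(model, unlabeled_loader, budget):
    """Pick the `budget` least-consistent samples to send for labeling."""
    ids, scores = consistency_scores(model, unlabeled_loader)
    top = torch.topk(scores, k=budget).indices
    return [ids[int(i)] for i in top]
```

In a full active learning loop, the selected samples would be annotated, moved from the unlabeled pool into the training set, and the model retrained before the next selection round.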

Authors (8)
  1. Tao Wang (700 papers)
  2. Xinlin Zhang (21 papers)
  3. Yuanbo Zhou (12 papers)
  4. Junlin Lan (4 papers)
  5. Tao Tan (54 papers)
  6. Min Du (46 papers)
  7. Qinquan Gao (11 papers)
  8. Tong Tong (26 papers)