Deep Interactive Segmentation of Medical Images: A Systematic Review and Taxonomy (2311.13964v2)

Published 23 Nov 2023 in eess.IV, cs.AI, cs.CV, cs.HC, and cs.LG

Abstract: Interactive segmentation is a crucial research area in medical image analysis that aims to boost the efficiency of costly annotations by incorporating human feedback. This feedback takes the form of clicks, scribbles, or masks and allows for iterative refinement of the model output, efficiently guiding the system toward the desired behavior. In recent years, deep learning-based approaches have propelled results to a new level, causing rapid growth in the field, with 121 methods proposed in the medical imaging domain alone. In this review, we provide a structured overview of this emerging field, featuring a comprehensive taxonomy, a systematic review of existing methods, and an in-depth analysis of current practices. Based on these contributions, we discuss the challenges and opportunities in the field. For instance, we find a severe lack of comparison across methods, which needs to be addressed through standardized baselines and benchmarks.
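
The interaction loop described in the abstract — the model predicts a mask, the user marks errors with clicks, and the model re-predicts with that guidance — can be made concrete with a short sketch. The snippet below is a minimal illustration under assumed interfaces, not the implementation of any surveyed method: the `model.predict(image, guidance)` call, the two-channel click encoding, and the first-error-pixel click simulator are all hypothetical placeholders.

```python
import numpy as np


def dice(a, b):
    """Dice overlap between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / max(a.sum() + b.sum(), 1)


def encode_clicks(shape, clicks):
    """Rasterize (y, x, is_foreground) clicks into two binary guidance channels."""
    fg = np.zeros(shape, dtype=np.float32)
    bg = np.zeros(shape, dtype=np.float32)
    for y, x, is_fg in clicks:
        (fg if is_fg else bg)[y, x] = 1.0
    return np.stack([fg, bg])


def interactive_refinement(model, image, ground_truth, max_clicks=10, target_dice=0.95):
    """Refine a segmentation by feeding simulated corrective clicks to the model.

    `model` is a hypothetical network whose predict(image, guidance) returns a
    foreground probability map; it stands in for any of the surveyed methods.
    """
    clicks = []
    prediction = np.zeros(image.shape[:2], dtype=bool)
    for _ in range(max_clicks):
        error = prediction ^ ground_truth  # currently mislabeled pixels
        if not error.any():
            break
        # Simplification: click the first mislabeled pixel. Click simulators in
        # the literature typically target the center of the largest error region.
        y, x = np.unravel_index(error.argmax(), error.shape)
        clicks.append((y, x, bool(ground_truth[y, x])))
        guidance = encode_clicks(image.shape[:2], clicks)
        prediction = model.predict(image, guidance) > 0.5
        if dice(prediction, ground_truth) >= target_dice:
            break
    return prediction, clicks
```

Published methods differ chiefly in how the guidance is encoded (e.g., binary disks, Gaussian heatmaps, or geodesic distance maps) and in how simulated interactions are sampled during training and evaluation.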
