Segment Anything Model for Medical Image Segmentation: Current Applications and Future Directions (2401.03495v1)
Abstract: Owing to the inherent flexibility of prompting, foundation models have become the dominant force in natural language processing and computer vision. The recent introduction of the Segment Anything Model (SAM) extends the prompt-driven paradigm to image segmentation, bringing with it a range of previously unexplored capabilities. However, whether SAM can be applied to medical image segmentation remains uncertain, given the substantial differences between natural and medical images. In this work, we provide a comprehensive overview of recent efforts to extend SAM to medical image segmentation tasks, covering both empirical benchmarking and methodological adaptation, and we explore potential directions for future research on SAM's role in medical image segmentation. While directly applying SAM to medical image segmentation has so far not yielded satisfactory performance on multi-modal and multi-target medical datasets, the many insights gleaned from these efforts offer valuable guidance for shaping foundation models for medical image analysis. To support ongoing research, we maintain an active repository containing an up-to-date paper list and a concise summary of open-source projects at https://github.com/YichiZhang98/SAM4MIS.
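Since the prompt-driven interface is central to all of the adaptations surveyed in this paper, a minimal sketch of promptable inference may be useful. The snippet below uses the official `segment_anything` package released with the SAM paper to segment a structure in a single 2D medical slice from a box-plus-point prompt; the checkpoint path, slice file, and prompt coordinates are illustrative placeholders, not values from any of the surveyed works.

```python
# Minimal sketch: prompting SAM on one 2D medical image slice.
# Assumes the official `segment_anything` package and a downloaded
# ViT-B checkpoint; paths and coordinates below are placeholders.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM backbone (ViT-B variant) and wrap it in a predictor.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Medical slices are typically single-channel with a wide intensity range,
# while SAM expects HxWx3 uint8 RGB: window/normalize, then stack channels.
ct_slice = np.load("slice.npy")  # placeholder: one 2D slice of a volume
lo, hi = np.percentile(ct_slice, (1, 99))
norm = np.clip((ct_slice - lo) / (hi - lo + 1e-6), 0, 1)
rgb = np.stack([(norm * 255).astype(np.uint8)] * 3, axis=-1)

predictor.set_image(rgb)  # computes the image embedding once per image

# A box prompt around the target organ (x0, y0, x1, y1) plus one
# positive click inside it; label 1 marks a foreground point.
box = np.array([120, 80, 260, 210])
point = np.array([[190, 145]])
label = np.array([1])

masks, scores, _ = predictor.predict(
    point_coords=point,
    point_labels=label,
    box=box,
    multimask_output=True,  # returns three candidate masks
)
best = masks[np.argmax(scores)]  # keep the highest-scoring candidate
print(best.shape, scores)
```

For a 3D volume, the same loop can be run slice by slice, which is exactly the setting where many of the surveyed 3D adaptations (volumetric adapters, 2.5D strategies) aim to improve on naive per-slice prompting.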