When SAM Meets Medical Images: An Investigation of Segment Anything Model (SAM) on Multi-phase Liver Tumor Segmentation (2304.08506v6)

Published 17 Apr 2023 in eess.IV and cs.CV

Abstract: Learning to segment without large-scale samples is an inherent human capability. Recently, the Segment Anything Model (SAM) has demonstrated significant zero-shot image segmentation, attracting considerable attention from the computer vision community. Here, we investigate the capability of SAM for medical image analysis, especially for multi-phase liver tumor segmentation (MPLiTS), in terms of prompts, data resolution, and phases. Experimental results demonstrate that there may be a large gap between SAM and the expected performance. Fortunately, the qualitative results show that SAM is a powerful annotation tool for the interactive medical image segmentation community.
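The paper reports a quantitative gap between SAM and expected performance on liver tumor segmentation; such gaps are conventionally measured with the Dice similarity coefficient between a predicted mask and the ground truth. A minimal sketch of that metric (the function name and toy masks below are illustrative, not taken from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: a small "tumor" region vs. a prediction shifted one pixel right
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True          # ground truth: 4 pixels
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 2:4] = True        # prediction: 4 pixels, 2 overlapping
print(dice_score(pred, gt))  # 2*2 / (4+4) = 0.5
```

A Dice score of 1.0 means perfect overlap; values well below that for SAM outputs on MPLiTS would correspond to the gap the abstract describes.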

Authors (4)
  1. Chuanfei Hu
  2. Tianyi Xia
  3. Shenghong Ju
  4. Xinde Li
Citations (66)
