When SAM Meets Medical Images: An Investigation of Segment Anything Model (SAM) on Multi-phase Liver Tumor Segmentation (2304.08506v6)
Published 17 Apr 2023 in eess.IV and cs.CV
Abstract: Learning to segment without large-scale samples is an inherent capability of humans. Recently, the Segment Anything Model (SAM) has demonstrated impressive zero-shot image segmentation, attracting considerable attention from the computer vision community. Here, we investigate the capability of SAM for medical image analysis, especially for multi-phase liver tumor segmentation (MPLiTS), in terms of prompts, data resolution, and phases. Experimental results demonstrate that there might be a large gap between SAM's actual performance and the expected performance. Fortunately, the qualitative results show that SAM is a powerful annotation tool for the interactive medical image segmentation community.
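As a rough illustration of the prompt-driven, interactive-annotation workflow the abstract alludes to, the sketch below feeds a single windowed CT slice to a pretrained SAM and requests a mask from a bounding-box prompt via the official segment-anything API. The checkpoint path, CT window settings, slice file name, and box coordinates are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumptions noted): box-prompted SAM on one CT slice.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def ct_slice_to_rgb(slice_hu, window_center=60, window_width=400):
    """Window a CT slice (Hounsfield units) and replicate it to 3 channels,
    since SAM expects an 8-bit RGB image. Window values are assumptions."""
    lo = window_center - window_width / 2
    hi = window_center + window_width / 2
    img = np.clip(slice_hu, lo, hi)
    img = ((img - lo) / (hi - lo) * 255).astype(np.uint8)
    return np.stack([img, img, img], axis=-1)

# Load a pretrained SAM (ViT-H) checkpoint and wrap it in a predictor.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

slice_hu = np.load("liver_ct_slice.npy")  # hypothetical 2D slice in HU
predictor.set_image(ct_slice_to_rgb(slice_hu))

# A rough bounding box around the suspected tumor (XYXY, pixel coordinates),
# e.g. drawn by an annotator in an interactive tool.
box = np.array([180, 210, 260, 290])
masks, scores, _ = predictor.predict(box=box, multimask_output=True)
best_mask = masks[np.argmax(scores)]  # keep the highest-scoring candidate mask
print("predicted tumor mask pixels:", int(best_mask.sum()))
```

In an interactive setting, the predicted mask would be shown to the annotator, who can refine it with additional point or box prompts rather than delineating the lesion from scratch.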
- Chuanfei Hu
- Tianyi Xia
- Shenghong Ju
- Xinde Li