SAM-Med3D: Towards General-purpose Segmentation Models for Volumetric Medical Images (2310.15161v3)
Abstract: Existing volumetric medical image segmentation models are typically task-specific, excelling at specific targets but struggling to generalize across anatomical structures or modalities. This limitation restricts their broader clinical use. In this paper, we introduce SAM-Med3D for general-purpose segmentation of volumetric medical images. Given only a few 3D prompt points, SAM-Med3D can accurately segment diverse anatomical structures and lesions across various modalities. To achieve this, we gather and process a large-scale 3D medical image dataset, SA-Med3D-140K, from a blend of public sources and licensed private datasets. This dataset includes 22K 3D images and 143K corresponding 3D masks. SAM-Med3D, a promptable segmentation model with a fully learnable 3D architecture, is then trained on this dataset using a two-stage procedure and exhibits impressive performance on both seen and unseen segmentation targets. We comprehensively evaluate SAM-Med3D on 16 datasets covering diverse anatomical structures, modalities, and targets, as well as zero-shot transferability to new/unseen tasks. The evaluation demonstrates the efficiency and efficacy of SAM-Med3D, as well as its promise as a pre-trained model for diverse downstream tasks. Our approach demonstrates that substantial medical resources can be harnessed to develop a general-purpose medical AI for various potential applications. Our dataset, code, and models are available at https://github.com/uni-medical/SAM-Med3D.
- Haoyu Wang
- Sizheng Guo
- Jin Ye
- Zhongying Deng
- Junlong Cheng
- Tianbin Li
- Jianpin Chen
- Yanzhou Su
- Ziyan Huang
- Yiqing Shen
- Bin Fu
- Shaoting Zhang
- Junjun He
- Yu Qiao
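
To make the promptable interface described in the abstract concrete, here is a minimal, self-contained PyTorch sketch of a point-prompted 3D segmenter: a 3D image encoder, a point-prompt encoder, and a prompt-conditioned mask decoder. This is purely illustrative; the class and method names are assumptions, not the authors' actual architecture or API (see the GitHub repository above for the real implementation).

```python
# Hypothetical sketch of point-prompted 3D segmentation in the style described
# by the abstract. NOT the actual SAM-Med3D code or API.
import torch
import torch.nn as nn


class Toy3DPromptableSegmenter(nn.Module):
    """Illustrative stand-in: 3D image encoder + point-prompt encoder
    + mask decoder, mirroring the promptable design in the abstract."""

    def __init__(self, channels: int = 8):
        super().__init__()
        # 3D image encoder: downsamples the volume into a feature grid.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv3d(channels, channels, kernel_size=3, stride=2, padding=1),
        )
        # Prompt encoder: embeds normalized (z, y, x) click coordinates.
        self.prompt_encoder = nn.Linear(3, channels)
        # Mask decoder: upsamples prompt-conditioned features to voxel logits.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(channels, channels, kernel_size=2, stride=2),
            nn.ReLU(),
            nn.ConvTranspose3d(channels, 1, kernel_size=2, stride=2),
        )

    def forward(self, volume: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # volume: (B, 1, D, H, W); points: (B, N, 3) with coords in [0, 1].
        feats = self.encoder(volume)
        # Condition image features on the mean point embedding (broadcast add).
        prompt = self.prompt_encoder(points).mean(dim=1)  # (B, C)
        feats = feats + prompt[:, :, None, None, None]
        return self.decoder(feats)  # per-voxel mask logits


model = Toy3DPromptableSegmenter()
ct = torch.randn(1, 1, 32, 64, 64)          # a dummy CT volume
clicks = torch.tensor([[[0.5, 0.4, 0.6]]])   # a single 3D foreground click
logits = model(ct, clicks)
print(logits.shape)  # torch.Size([1, 1, 32, 64, 64])
```

The key design point the sketch captures is that a single click is a full 3D coordinate, so one prompt can condition segmentation across the whole volume rather than slice by slice.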