
SAM3D: Segment Anything Model in Volumetric Medical Images (2309.03493v4)

Published 7 Sep 2023 in eess.IV and cs.CV

Abstract: Image segmentation remains a pivotal component in medical image analysis, aiding in the extraction of critical information for precise diagnostic practices. With the advent of deep learning, automated image segmentation methods have risen to prominence, showcasing exceptional proficiency in processing medical imagery. Motivated by the Segment Anything Model (SAM), a foundational model renowned for its remarkable precision and robust generalization capabilities in segmenting 2D natural images, we introduce SAM3D, an innovative adaptation tailored for 3D volumetric medical image analysis. Unlike current SAM-based methods that segment volumetric data by converting the volume into separate 2D slices for individual analysis, our SAM3D model processes the entire 3D volume in a unified manner. Extensive experiments on multiple medical image datasets demonstrate that our network attains competitive results compared with other state-of-the-art methods in 3D medical segmentation tasks while being significantly more efficient in terms of parameters. Code and checkpoints are available at https://github.com/UARK-AICV/SAM3D.
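
The distinction the abstract draws, slice-wise 2D processing versus a single unified pass over the whole volume, can be sketched with a toy example. This is only an illustrative sketch in PyTorch: `TinyVolumetricSegmenter`, `segment_slicewise`, and all layer choices are placeholders invented here, not the actual SAM3D architecture or the code in the authors' repository.

```python
import torch
import torch.nn as nn

# Hypothetical illustration only: these modules are placeholders, not SAM3D components.
class TinyVolumetricSegmenter(nn.Module):
    def __init__(self, in_channels=1, num_classes=3):
        super().__init__()
        # 3D convolutional encoder that sees the entire volume at once.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # 1x1x1 convolution producing per-voxel class logits.
        self.head = nn.Conv3d(32, num_classes, kernel_size=1)

    def forward(self, volume):
        # volume: (batch, channels, depth, height, width)
        return self.head(self.encoder(volume))

def segment_slicewise(model2d, volume):
    # Slice-wise baseline: run a 2D model on each axial slice independently,
    # then re-stack the predictions along the depth axis.
    slices = [model2d(volume[:, :, d]) for d in range(volume.shape[2])]
    return torch.stack(slices, dim=2)

if __name__ == "__main__":
    vol = torch.randn(1, 1, 32, 64, 64)        # toy CT-like volume (B, C, D, H, W)

    logits = TinyVolumetricSegmenter()(vol)    # single unified 3D pass
    print(logits.shape)                        # torch.Size([1, 3, 32, 64, 64])

    model2d = nn.Conv2d(1, 3, kernel_size=1)   # stand-in for a 2D slice model
    slicewise = segment_slicewise(model2d, vol)
    print(slicewise.shape)                     # torch.Size([1, 3, 32, 64, 64])
```

The slice-wise path discards inter-slice context, which is the limitation the unified volumetric approach is meant to address; the sketch only contrasts the two data flows, not their segmentation quality.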

Authors (8)
  1. Nhat-Tan Bui (9 papers)
  2. Dinh-Hieu Hoang (6 papers)
  3. Minh-Triet Tran (70 papers)
  4. Donald Adjeroh (12 papers)
  5. Brijesh Patel (2 papers)
  6. Arabinda Choudhary (1 paper)
  7. Ngan Le (84 papers)
  8. Gianfranco Doretto (30 papers)
Citations (24)
