SAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM (2304.05622v4)
Abstract: The Segment Anything Model (SAM) is a new image segmentation tool trained on the largest segmentation dataset available to date. The model has demonstrated that, given prompts, it can create high-quality masks for general images. However, its performance on medical images requires further validation. To assist with the development, assessment, and application of SAM on medical images, we introduce Segment Any Medical Model (SAMM), an extension that integrates SAM into 3D Slicer, an image processing and visualization platform used extensively by the medical imaging community. This open-source extension to 3D Slicer and its demonstrations are posted on GitHub (https://github.com/bingogome/samm). SAMM achieves a latency of 0.6 seconds per complete inference cycle and can infer image masks in near real time.
- Yihao Liu
- Jiaming Zhang
- Zhangcong She
- Amir Kheradmand
- Mehran Armand