From Generalization to Precision: Exploring SAM for Tool Segmentation in Surgical Environments (2402.17972v1)
Abstract: Purpose: Accurate tool segmentation is essential in computer-aided procedures. However, this task poses challenges due to the presence of artifacts and the limited training data available in medical scenarios. Methods that generalize to unseen data represent a promising avenue, and zero-shot segmentation offers a way to account for data limitations. Initial exploratory work with the Segment Anything Model (SAM) shows that bounding-box-based prompting yields notable zero-shot generalization. However, point-based prompting leads to degraded performance that further deteriorates under image corruption. We argue that SAM drastically over-segments images with high corruption levels, resulting in degraded performance when only a single segmentation mask is considered, whereas combining the masks that overlap the object of interest generates an accurate prediction. Method: We use SAM to generate over-segmented predictions of endoscopic frames. We then employ the ground-truth tool mask to analyze SAM's results when the best single mask is selected as the prediction and when all individual masks overlapping the object of interest are combined to obtain the final predicted mask. We analyze the Endovis18 and Endovis17 instrument segmentation datasets using synthetic corruptions of varying strengths, as well as an in-house dataset featuring counterfactually created real-world corruptions. Results: Combining the over-segmented masks improves the IoU. Furthermore, selecting the best single segmentation achieves a competitive IoU score on clean images. Conclusions: Combined SAM predictions yield improved results and robustness up to a certain corruption level. However, appropriate prompting strategies are fundamental for implementing these models in the medical domain.
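The two evaluation strategies described in the abstract (best single SAM mask vs. the union of all masks overlapping the ground-truth tool) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names and the `min_overlap` threshold used to decide whether a mask "overlaps the object of interest" are assumptions introduced here for clarity.

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 0.0

def best_single_mask_iou(masks, gt):
    """IoU when only the single best-matching SAM mask is kept as the prediction."""
    return max((iou(m, gt) for m in masks), default=0.0)

def combined_mask_iou(masks, gt, min_overlap=0.5):
    """IoU when all SAM masks overlapping the ground-truth tool are merged.

    min_overlap is a hypothetical criterion: the fraction of a mask's area
    that must lie inside the ground truth for it to be included in the union.
    """
    combined = np.zeros_like(gt, dtype=bool)
    for m in masks:
        overlap_fraction = np.logical_and(m, gt).sum() / max(m.sum(), 1)
        if overlap_fraction >= min_overlap:
            combined |= m.astype(bool)
    return iou(combined, gt)
```

On a corrupted frame where SAM splits one instrument into several fragments, each fragment alone scores a low IoU, while their union can recover most of the tool, which is the effect the paper quantifies.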