From Generalization to Precision: Exploring SAM for Tool Segmentation in Surgical Environments (2402.17972v1)

Published 28 Feb 2024 in cs.CV

Abstract: Purpose: Accurate tool segmentation is essential in computer-aided procedures. However, this task poses challenges due to the presence of artifacts and the limited training data in medical scenarios. Methods that generalize to unseen data represent an interesting avenue, where zero-shot segmentation offers an option to account for data limitations. Initial exploratory works with the Segment Anything Model (SAM) show that bounding-box-based prompting achieves notable zero-shot generalization. However, point-based prompting leads to degraded performance that further deteriorates under image corruption. We argue that SAM drastically over-segments images with high corruption levels, resulting in degraded performance when only a single segmentation mask is considered, while the combination of the masks overlapping the object of interest yields an accurate prediction. Method: We use SAM to generate over-segmented predictions of endoscopic frames. Then, we employ the ground-truth tool mask to analyze the results of SAM when the best single mask is selected as the prediction and when all the individual masks overlapping the object of interest are combined to obtain the final predicted mask. We analyze the Endovis18 and Endovis17 instrument segmentation datasets using synthetic corruptions of various strengths and an in-house dataset featuring counterfactually created real-world corruptions. Results: Combining the over-segmented masks contributes to improvements in the IoU. Furthermore, selecting the best single segmentation presents a competitive IoU score for clean images. Conclusions: Combined SAM predictions present improved results and robustness up to a certain corruption level. However, appropriate prompting strategies are fundamental for implementing these models in the medical domain.
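The mask-selection step described in the abstract can be illustrated with a short sketch. The snippet below is not the authors' code: it assumes SAM's automatic mask generator has already produced a list of boolean masks for a frame, and it uses a hypothetical `overlap_thresh` parameter to decide when a proposed mask "overlaps" the ground-truth tool (the paper does not specify the exact criterion). It contrasts the two oracle strategies: keeping the best single mask versus taking the union of all overlapping masks.

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union between two boolean masks."""
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union > 0 else 0.0

def evaluate_sam_masks(sam_masks: list[np.ndarray], gt_mask: np.ndarray,
                       overlap_thresh: float = 0.5) -> tuple[float, float]:
    """Compare the two selection strategies from the abstract.

    sam_masks: boolean masks proposed by SAM's automatic mask generator.
    gt_mask:   ground-truth boolean tool mask, used here as an oracle.
    overlap_thresh: assumed fraction of a mask's area that must lie inside
                    the ground truth for it to count as overlapping the tool.
    """
    # Strategy 1: best single mask, scored by IoU against the ground truth.
    best_single_iou = max((iou(m, gt_mask) for m in sam_masks), default=0.0)

    # Strategy 2: union of every mask that overlaps the object of interest.
    combined = np.zeros_like(gt_mask, dtype=bool)
    for m in sam_masks:
        area = m.sum()
        if area > 0 and np.logical_and(m, gt_mask).sum() / area >= overlap_thresh:
            combined |= m
    combined_iou = iou(combined, gt_mask)

    return best_single_iou, combined_iou
```

Under this reading, the combined prediction is expected to recover tools that SAM has split into several fragments on corrupted frames, while the best-single-mask score remains competitive on clean images.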
