Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot Segmentation on Whole Slide Imaging (2304.04155v1)

Published 9 Apr 2023 in eess.IV and cs.CV

Abstract: The segment anything model (SAM) was released as a foundation model for image segmentation. The promptable segmentation model was trained by over 1 billion masks on 11M licensed and privacy-respecting images. The model supports zero-shot image segmentation with various segmentation prompts (e.g., points, boxes, masks). It makes the SAM attractive for medical image analysis, especially for digital pathology where the training data are rare. In this study, we evaluate the zero-shot segmentation performance of SAM model on representative segmentation tasks on whole slide imaging (WSI), including (1) tumor segmentation, (2) non-tumor tissue segmentation, (3) cell nuclei segmentation. Core Results: The results suggest that the zero-shot SAM model achieves remarkable segmentation performance for large connected objects. However, it does not consistently achieve satisfying performance for dense instance object segmentation, even with 20 prompts (clicks/boxes) on each image. We also summarized the identified limitations for digital pathology: (1) image resolution, (2) multiple scales, (3) prompt selection, and (4) model fine-tuning. In the future, the few-shot fine-tuning with images from downstream pathological segmentation tasks might help the model to achieve better performance in dense object segmentation.

Segment Anything Model (SAM) for Digital Pathology: Zero-shot Segmentation on Whole Slide Imaging

The paper under review assesses the performance of the Segment Anything Model (SAM) on digital pathology tasks involving whole slide imaging (WSI). SAM was introduced as a foundation model for image segmentation, trained on over 1 billion masks across 11 million images, providing a substantial basis for its zero-shot segmentation capabilities. The model's ability to perform image segmentation without pre-training on domain-specific data has significant implications for digital pathology, a field in which obtaining annotated training data is challenging due to intensive manual effort, privacy concerns, and the intricacy of the annotation process.

Zero-shot Segmentation Assessment

This paper systematically evaluates SAM's segmentation performance across three representative tasks: tumor segmentation, non-tumor tissue segmentation, and cell nuclei segmentation. The results demonstrate that SAM achieves commendable performance in segmenting large connected regions, such as tumors, especially when multiple prompt points are used. For instance, with 20 prompt points, SAM achieved a Dice score of 74.98 for tumor segmentation, surpassing single-point prompting and approaching the state-of-the-art (SOTA) reference. However, SAM's performance is inconsistent for dense instance segmentation even with numerous prompts per image: for nuclei segmentation, the traditional SOTA model reaches a Dice score of 81.77, compared to SAM's 41.65 with 20 point prompts.
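The Dice scores quoted above compare predicted masks against reference annotations. As a point of reference, a minimal sketch of the Dice coefficient for binary masks (assuming NumPy arrays, not the paper's actual evaluation code) might look like:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 4x4 masks: 3 foreground pixels each, 2 of them shared
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 1, 0],
                   [0, 0, 0, 0]])
print(round(dice_score(pred, target), 4))  # 2*2/(3+3) = 0.6667
```

Scores in the paper are reported on a 0–100 scale, i.e. the value above times 100.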

Limitations Identified in SAM Application

The paper highlights several limitations in the application of SAM to digital pathology:

  • Image Resolution: SAM operates at a resolution significantly lower than the gigapixel scale of WSI data, leading to computational challenges and limiting practical usability in high-resolution scenarios.
  • Multiple Scales: Different tissue types require specific resolution scales to achieve optimal segmentation. SAM's performance varies across scales, making it less effective for tasks requiring multi-scale analysis.
  • Prompt Selection: The segmentation performance depends heavily on the strategic selection of segmentation prompts. The model's reliance on high-quality prompts underscores a lack of robustness, especially in zero-shot conditions.
  • Model Fine-Tuning: While SAM offers zero-shot capabilities, exploration into few-shot fine-tuning strategies could better align its performance with domain-specific needs, reducing manual effort and improving segmentation accuracy for dense and heterogeneous objects.
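To make the image-resolution limitation concrete: applying a fixed-input-size model such as SAM to a gigapixel slide in practice means tiling the slide into patches. The sketch below (an illustration of the general tiling approach, not code from the paper; the 1024 px patch edge is assumed from SAM's default input resolution) shows how quickly the patch count grows:

```python
def tile_grid(width: int, height: int, tile: int = 1024, overlap: int = 128):
    """Top-left corners of overlapping square tiles covering a width x height slide.

    `tile` is the patch edge length; `overlap` between neighboring patches
    reduces boundary artifacts when stitching per-patch predictions.
    """
    stride = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, stride))
    ys = list(range(0, max(height - tile, 0) + 1, stride))
    # Ensure the final row/column of tiles reaches the slide border
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

# A modest 40x-magnification region of 80,000 x 60,000 px (4.8 gigapixels)
corners = tile_grid(80_000, 60_000)
print(len(corners))  # thousands of 1024x1024 patches for a single region
```

Each patch then requires its own prompts and its own forward pass, which is the computational challenge the authors point to, and also why multi-scale analysis (running the same region at several downsampled magnifications) multiplies the cost further.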

Implications and Future Directions

The application of SAM in digital pathology reveals promising results for large object segmentation under zero-shot conditions, validating its potential utility in medical imaging where training data scarcity is a significant hurdle. However, the need for improved fine-tuning strategies presents a fertile area for further research, potentially enhancing SAM's fidelity in dense instance segmentation. Incorporating few-shot learning techniques could mitigate the reliance on dense prompting and expand SAM's efficacy across a broader spectrum of medical imaging tasks. These developments could drive advancements both in theoretical AI model application and in practical digital pathology workflows.

The integration of SAM with online/offline fine-tuning methodologies represents a future trajectory that could streamline digital pathology processes, widening the scope of SAM's utility in clinical and research settings. This trajectory might also influence subsequent developments in AI-assisted image analysis beyond pathology, paving the way for robust foundation models that serve specialized domains where pretraining data are limited.

Authors (16)
  1. Ruining Deng (67 papers)
  2. Can Cui (96 papers)
  3. Quan Liu (116 papers)
  4. Tianyuan Yao (39 papers)
  5. Lucas W. Remedios (21 papers)
  6. Shunxing Bao (67 papers)
  7. Bennett A. Landman (123 papers)
  8. Lee E. Wheless (4 papers)
  9. Lori A. Coburn (10 papers)
  10. Keith T. Wilson (9 papers)
  11. Yaohong Wang (15 papers)
  12. Shilin Zhao (20 papers)
  13. Agnes B. Fogo (17 papers)
  14. Haichun Yang (47 papers)
  15. Yucheng Tang (67 papers)
  16. Yuankai Huo (161 papers)
Citations (170)