
Tissue Cross-Section and Pen Marking Segmentation in Whole Slide Images (2401.13511v1)

Published 24 Jan 2024 in eess.IV, cs.CV, and cs.LG

Abstract: Tissue segmentation is a routine preprocessing step to reduce the computational cost of whole slide image (WSI) analysis by excluding background regions. Traditional image processing techniques are commonly used for tissue segmentation, but often require manual adjustments to parameter values for atypical cases, fail to exclude all slide and scanning artifacts from the background, and are unable to segment adipose tissue. Pen marking artifacts in particular can be a potential source of bias for subsequent analyses if not removed. In addition, several applications require the separation of individual cross-sections, which can be challenging due to tissue fragmentation and adjacent positioning. To address these problems, we develop a convolutional neural network for tissue and pen marking segmentation using a dataset of 200 H&E stained WSIs. For separating tissue cross-sections, we propose a novel post-processing method based on clustering predicted centroid locations of the cross-sections in a 2D histogram. On an independent test set, the model achieved a mean Dice score of 0.981$\pm$0.033 for tissue segmentation and a mean Dice score of 0.912$\pm$0.090 for pen marking segmentation. The mean absolute difference between the number of annotated and separated cross-sections was 0.075$\pm$0.350. Our results demonstrate that the proposed model can accurately segment H&E stained tissue cross-sections and pen markings in WSIs while being robust to many common slide and scanning artifacts. The model, trained model parameters, and post-processing method are made publicly available as a Python package called SlideSegmenter.
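The cross-section separation described above can be illustrated with a minimal sketch: each foreground pixel votes for a predicted centroid location, the votes are accumulated in a 2D histogram, local histogram peaks become cluster centers, and pixels are assigned to the nearest peak. This is an illustrative reconstruction of the idea stated in the abstract, not the authors' SlideSegmenter implementation; the function name, bin size, and vote threshold are assumptions.

```python
import numpy as np

def separate_cross_sections(centroids, bin_size=8, min_votes=5):
    """Cluster per-pixel predicted centroid locations via a 2D histogram.

    centroids: (N, 2) array of predicted (row, col) centroid positions,
    one per foreground pixel. Returns an integer cluster label per pixel.
    Illustrative sketch only, not the paper's actual post-processing code.
    """
    # Quantize predicted centroids into histogram bins and count votes.
    bins = (centroids // bin_size).astype(int)
    h, w = bins.max(axis=0) + 1
    hist = np.zeros((h, w), dtype=int)
    np.add.at(hist, (bins[:, 0], bins[:, 1]), 1)

    # Peaks: bins exceeding min_votes that dominate their 3x3 neighbourhood.
    peaks = []
    for r in range(h):
        for c in range(w):
            v = hist[r, c]
            if v < min_votes:
                continue
            nb = hist[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if v == nb.max():
                peaks.append((r, c))
    peaks = np.array(peaks)

    # Assign each pixel's predicted centroid to the nearest peak.
    d = np.linalg.norm(bins[:, None, :] - peaks[None, :, :], axis=2)
    return d.argmin(axis=1)
```

Clustering in centroid-vote space rather than image space is what makes the method robust to fragmented tissue: fragments of one cross-section vote for the same centroid even when they are disconnected in the image.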
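The reported results use the Dice score, the standard overlap metric for segmentation masks. As a reminder of how it is computed, a minimal implementation (the epsilon smoothing term is an assumption to guard against empty masks, not taken from the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (1 = foreground).

    Dice = 2 * |pred AND target| / (|pred| + |target|),
    ranging from 0 (no overlap) to 1 (identical masks).
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```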
