Computational Pathology at Health System Scale -- Self-Supervised Foundation Models from Three Billion Images (2310.07033v1)

Published 10 Oct 2023 in cs.CV, cs.AI, cs.LG, and eess.IV

Abstract: Recent breakthroughs in self-supervised learning have enabled the use of large unlabeled datasets to train visual foundation models that can generalize to a variety of downstream tasks. While this training paradigm is well suited for the medical domain where annotations are scarce, large-scale pre-training in the medical domain, and in particular pathology, has not been extensively studied. Previous work in self-supervised learning in pathology has leveraged smaller datasets for both pre-training and evaluating downstream performance. The aim of this project is to train the largest academic foundation model and benchmark the most prominent self-supervised learning algorithms by pre-training and evaluating downstream performance on large clinical pathology datasets. We collected the largest pathology dataset to date, consisting of over 3 billion images from over 423 thousand microscopy slides. We compared pre-training of visual transformer models using the masked autoencoder (MAE) and DINO algorithms. We evaluated performance on six clinically relevant tasks from three anatomic sites and two institutions: breast cancer detection, inflammatory bowel disease detection, breast cancer estrogen receptor prediction, lung adenocarcinoma EGFR mutation prediction, and lung cancer immunotherapy response prediction. Our results demonstrate that pre-training on pathology data is beneficial for downstream performance compared to pre-training on natural images. Additionally, the DINO algorithm achieved better generalization performance across all tasks tested. The presented results signify a phase change in computational pathology research, paving the way into a new era of more performant models based on large-scale, parallel pre-training at the billion-image scale.
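The abstract names the two self-supervised algorithms compared, MAE and DINO, but not their mechanics. As a rough illustration of the MAE idea (mask most image patches, encode only the visible ones, reconstruct the masked ones), here is a minimal PyTorch sketch. The TinyMAE class, its toy dimensions, and the two-layer transformer are illustrative assumptions, not the authors' implementation or training configuration.

```python
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    """Toy masked-autoencoder sketch: encode visible patches, reconstruct the rest."""
    def __init__(self, img_size=224, patch=16, dim=192, mask_ratio=0.75):
        super().__init__()
        self.patch = patch
        self.mask_ratio = mask_ratio
        self.num_patches = (img_size // patch) ** 2
        self.patch_dim = 3 * patch * patch
        self.embed = nn.Linear(self.patch_dim, dim)
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=1)
        self.head = nn.Linear(dim, self.patch_dim)

    def patchify(self, x):
        # (B, 3, H, W) -> (B, N, 3*p*p): split the image into non-overlapping patches
        p = self.patch
        B, C, H, W = x.shape
        x = x.unfold(2, p, p).unfold(3, p, p)              # B, C, H/p, W/p, p, p
        return x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)

    def forward(self, imgs):
        patches = self.patchify(imgs)                       # B, N, patch_dim
        B, N, _ = patches.shape
        # keep a random subset of patches; mask the rest (75% by default, as in MAE)
        n_keep = int(N * (1 - self.mask_ratio))
        keep = torch.rand(B, N, device=imgs.device).argsort(dim=1)[:, :n_keep]
        tokens = self.embed(patches) + self.pos
        idx = keep.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        encoded = self.encoder(torch.gather(tokens, 1, idx))
        # scatter encoded tokens back; masked slots receive the learnable mask token
        full = self.mask_token.expand(B, N, -1).clone()
        full.scatter_(1, idx, encoded)
        recon = self.head(self.decoder(full + self.pos))
        # reconstruction loss is computed on masked patches only
        mask = torch.ones(B, N, device=imgs.device)
        mask.scatter_(1, keep, 0.0)
        return (((recon - patches) ** 2).mean(-1) * mask).sum() / mask.sum()

loss = TinyMAE()(torch.randn(2, 3, 224, 224))
loss.backward()
```

DINO, the better-generalizing method in the paper's comparison, works differently: a student network is trained to match the output distribution of a momentum-averaged teacher across augmented views, with no pixel reconstruction; it is omitted here for brevity.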

Authors (13)
  1. Gabriele Campanella
  2. Ricky Kwan
  3. Eugene Fluder
  4. Jennifer Zeng
  5. Aryeh Stock
  6. Brandon Veremis
  7. Alexandros D. Polydorides
  8. Cyrus Hedvat
  9. Adam Schoenfeld
  10. Chad Vanderbilt
  11. Patricia Kovatch
  12. Carlos Cordon-Cardo
  13. Thomas J. Fuchs
Citations (22)
