Computational Pathology at Health System Scale -- Self-Supervised Foundation Models from Three Billion Images (2310.07033v1)
Abstract: Recent breakthroughs in self-supervised learning have enabled the use of large unlabeled datasets to train visual foundation models that generalize to a variety of downstream tasks. While this training paradigm is well suited to the medical domain, where annotations are scarce, large-scale pre-training in medicine, and in pathology in particular, has not been extensively studied. Previous self-supervised learning work in pathology has leveraged smaller datasets for both pre-training and downstream evaluation. The aim of this project is to train the largest academic foundation model to date and to benchmark the most prominent self-supervised learning algorithms by pre-training and evaluating downstream performance on large clinical pathology datasets. We collected the largest pathology dataset to date, consisting of over 3 billion images from over 423,000 microscopy slides. We compared pre-training of vision transformer models using the masked autoencoder (MAE) and DINO algorithms. We evaluated performance on six clinically relevant tasks from three anatomic sites and two institutions: breast cancer detection, inflammatory bowel disease detection, breast cancer estrogen receptor prediction, lung adenocarcinoma EGFR mutation prediction, and lung cancer immunotherapy response prediction. Our results demonstrate that pre-training on pathology data is beneficial for downstream performance compared to pre-training on natural images. Additionally, the DINO algorithm achieved better generalization performance across all tasks tested. These results signify a phase change in computational pathology research, paving the way to a new era of more performant models based on large-scale, parallel pre-training at the billion-image scale.
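For readers unfamiliar with the pre-training objectives compared in the abstract, the following is a minimal PyTorch sketch of masked-autoencoder-style pre-training on image patches. It illustrates only the general recipe (random patch masking, encoding of visible patches, reconstruction loss on masked patches); the tiny encoder and decoder, patch size, and masking ratio are illustrative placeholders, positional embeddings and other details are omitted, and this is not the configuration or implementation used in the paper.

```python
# Minimal sketch of MAE-style pre-training; all module sizes and
# hyperparameters below are placeholders, not the paper's setup.
import torch
import torch.nn as nn

patch_size, embed_dim, mask_ratio = 16, 192, 0.75

class TinyMAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.patch_embed = nn.Linear(3 * patch_size * patch_size, embed_dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.decoder = nn.Linear(embed_dim, 3 * patch_size * patch_size)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))

    def forward(self, imgs):
        B, C, H, W = imgs.shape
        # Split each image into non-overlapping patches: (B, N, C * p * p).
        patches = imgs.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch_size * patch_size)
        N = patches.shape[1]
        # Randomly keep a subset of patches per image; the rest are masked out.
        n_keep = int(N * (1 - mask_ratio))
        perm = torch.rand(B, N, device=imgs.device).argsort(dim=1)
        keep_idx = perm[:, :n_keep]
        visible = torch.gather(
            patches, 1, keep_idx.unsqueeze(-1).expand(-1, -1, patches.shape[-1])
        )
        # Encode only the visible patches, then insert mask tokens for the
        # missing positions and reconstruct every patch with a light decoder.
        tokens = self.encoder(self.patch_embed(visible))
        full = self.mask_token.expand(B, N, -1).clone()
        full.scatter_(1, keep_idx.unsqueeze(-1).expand(-1, -1, embed_dim), tokens)
        recon = self.decoder(full)
        # Reconstruction loss is computed on the masked patches only.
        mask = torch.ones(B, N, device=imgs.device)
        mask.scatter_(1, keep_idx, 0.0)
        loss = (((recon - patches) ** 2).mean(dim=-1) * mask).sum() / mask.sum()
        return loss

model = TinyMAE()
loss = model(torch.randn(2, 3, 224, 224))
loss.backward()
```

DINO, the other objective compared in the paper, instead trains a student network to match the softened outputs of an exponential-moving-average teacher across augmented views; it uses no reconstruction target, which is one reason the two methods can behave differently downstream.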
- Gabriele Campanella
- Ricky Kwan
- Eugene Fluder
- Jennifer Zeng
- Aryeh Stock
- Brandon Veremis
- Alexandros D. Polydorides
- Cyrus Hedvat
- Adam Schoenfeld
- Chad Vanderbilt
- Patricia Kovatch
- Carlos Cordon-Cardo
- Thomas J. Fuchs