
Atlas: A Novel Pathology Foundation Model by Mayo Clinic, Charité, and Aignostics (2501.05409v2)

Published 9 Jan 2025 in cs.CV, cs.AI, and cs.LG

Abstract: Recent advances in digital pathology have demonstrated the effectiveness of foundation models across diverse applications. In this report, we present Atlas, a novel vision foundation model based on the RudolfV approach. Our model was trained on a dataset comprising 1.2 million histopathology whole slide images, collected from two medical institutions: Mayo Clinic and Charité - Universitätsmedizin Berlin. Comprehensive evaluations show that Atlas achieves state-of-the-art performance across twenty-one public benchmark datasets, even though it is neither the largest model by parameter count nor by training dataset size.

Summary

  • The paper presents a ViT-based foundation model for histopathology that achieves a 61.9% average score across 21 benchmarks.
  • The paper employs a self-supervised ViT-H/14 architecture trained on 1.2M whole slide images from over 490,000 clinical cases.
  • The paper highlights potential improvements by scaling data and model size and calls for standardized benchmarks in pathology.

A Novel Pathology Foundation Model by Mayo Clinic, Charité, and Aignostics: Expert Review

The paper by Alber et al. presents a Vision Transformer (ViT) based foundation model tailored for histopathological data, developed collaboratively by Mayo Clinic, Charité, and Aignostics. This model, named Atlas by its authors and referred to here as the Mayo-Charité-Aignostics (MCA) model, adapts the RudolfV training approach, emphasizing robust feature extraction across a diverse array of histopathology data. The model demonstrates notable gains in generalization and accuracy on a range of downstream tasks compared to existing models.

Data and Methodology

The MCA model is trained on an extensive dataset comprising 1.2 million whole slide images (WSIs) drawn from over 490,000 clinical cases collected at Mayo Clinic and Charité - Universitätsmedizin Berlin. The dataset spans a wide range of tissue types, staining methods, and image resolutions, providing a rich training ground for the model's self-supervised learning framework. The training pipeline uses a ViT-H/14 architecture, a Vision Transformer variant that operates on 14x14-pixel patches and offers high representational capacity for dense, high-resolution image data.

The model was trained on approximately 520 million image tiles sampled at multiple magnifications, ensuring its capacity to generalize across scales, a crucial requirement in histopathology, where relevant signal ranges from cellular detail to macro-structural context. Training was carried out on Nvidia H100 GPUs, reflecting the computational demands of a dataset of this size.
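The multi-magnification tiling described above can be sketched as follows. The tile size (224 px, matching ViT-H/14 input) and the magnification levels are illustrative assumptions, not the paper's exact training configuration:

```python
# Sketch of a multi-magnification tile grid over a whole slide image.
# Tile size and magnification set are illustrative assumptions; the
# paper's exact tiling parameters are not reproduced here.

def tile_grid(width_at_40x, height_at_40x, magnifications=(5, 10, 20, 40),
              tile_px=224, base_mag=40):
    """Return tile origin coordinates (in base-magnification pixels)
    for each requested magnification level."""
    grids = {}
    for mag in magnifications:
        # A tile read at a lower magnification covers a larger physical
        # region; its footprint at base magnification scales by base_mag/mag.
        step = tile_px * base_mag // mag
        grids[mag] = [(x, y)
                      for y in range(0, height_at_40x - step + 1, step)
                      for x in range(0, width_at_40x - step + 1, step)]
    return grids

# Toy slide: 4480 x 2240 pixels at 40x magnification.
grids = tile_grid(4480, 2240)
for mag in sorted(grids):
    print(f"{mag}x: {len(grids[mag])} tiles")
```

Lower magnifications yield far fewer tiles per slide but expose macro-structural context, which is why mixing scales helps the model see both cellular detail and tissue architecture.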

Performance Evaluation

The evaluation protocol for the MCA model was rigorous and comprehensive, combining public benchmark datasets with standard evaluation frameworks such as eva and HEST. The model was tested across 21 benchmarks covering tasks ranging from cancer subtype classification to molecular biomarker prediction. Notably, the MCA model achieved an overall average score of 61.9%, outperforming other leading models such as Virchow2 and H-Optimus-0 on several tasks.
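The frozen-backbone evaluation implied by this protocol can be sketched with a minimal probe on synthetic embeddings. Here a nearest-centroid classifier stands in for the trained linear probes used by harnesses such as eva; the embedding dimension, class count, and data are all illustrative:

```python
import numpy as np

# Minimal sketch of frozen-backbone evaluation: embeddings from a
# foundation model are held fixed, and only a cheap classifier is fit
# on top. A nearest-centroid probe stands in for a linear probe; the
# "embeddings" here are synthetic Gaussian clusters, one per class.
rng = np.random.default_rng(0)
dim, n_classes, n_per_class = 64, 3, 50

centers = rng.normal(size=(n_classes, dim)) * 5
X = np.concatenate([c + rng.normal(size=(n_per_class, dim)) for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

# Fit class centroids on an even/odd train split; score the held-out half.
train = np.arange(len(y)) % 2 == 0
centroids = np.stack([X[train & (y == c)].mean(axis=0)
                      for c in range(n_classes)])
dists = ((X[~train][:, None, :] - centroids[None]) ** 2).sum(-1)
acc = (dists.argmin(axis=1) == y[~train]).mean()
print(f"probe accuracy: {acc:.2f}")
```

Because the backbone is frozen, probe accuracy isolates the quality of the learned representations, which is what makes such scores comparable across foundation models.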

The MCA model excelled particularly in morphology-related tasks, achieving the highest accuracy in 6 of the 9 tasks in that category. This suggests an enhanced capacity to capture the morphological variation that often signals pathological conditions. The model was also competitive on molecular-related tasks, tying for top results on 5 of 12 tasks.

Discussion and Implications

The enhanced performance of the MCA model, particularly in tasks with significant morphological components, aligns with contemporary trends that emphasize the importance of multi-scale feature representation in pathology. The model's success validates the hypothesis that diverse and large-scale datasets, alongside tailored model architectures, can significantly boost the accuracy and robustness of pathology applications.

Despite the impressive results, the paper suggests that further scaling of both data and model size could yield additional gains, consistent with related work on scaling behavior in foundation models. This positions the MCA model as a promising framework for future developments in AI-driven pathology diagnostics, with implications for both clinical and research settings.

The authors also highlight the need for a standardized, wide-reaching benchmark framework for evaluating pathology foundation models. Such a framework would yield more granular insight into model performance across varied pathological conditions and help the field advance on a common footing.

In conclusion, the MCA model represents a substantial step forward in computational pathology, offering a robust and versatile tool for medical diagnostics. Future work should focus on further scaling and on broadening the diversity of training data to unlock additional capabilities of such foundation models.
