
Diversity Over Scale: Whole-Slide Image Variety Enables H&E Foundation Model Training with Fewer Patches (2511.10286v1)

Published 13 Nov 2025 in q-bio.TO

Abstract: Rapid progress in computational pathology is increasingly driven by vision foundation models pretrained on vast histopathology datasets. While recent efforts have prioritized training on an ever-larger number of patches, we take an alternative approach focused on data diversity. Our foundation model, Athena, was initialized from a pretrained model and trained on just 115 million tissue patches, several times fewer than recent histopathology foundation models. Rather than relying on patch volume or complex sampling heuristics, we maximize data diversity by randomly selecting only a moderate number of patches per whole-slide image from our diverse internal repository, which spans multiple countries, institutions, and scanner types. Evaluated on a single patch-level benchmark and four slide-level downstream tasks (two molecular and two morphological), Athena approaches the state of the art and even surpasses several models trained on substantially larger datasets. This indicates that diversity across whole-slide images, rather than patch quantity alone, drives learning in histopathology foundation models.
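The diversity-first sampling the abstract describes, taking a capped random subset of patches from every whole-slide image instead of many patches from a few slides, could be sketched as follows. This is an illustrative sketch only: the function name, the per-slide cap, and the data layout are assumptions, not details from the paper.

```python
import random

def sample_patches(slide_patch_ids, patches_per_slide=64, seed=0):
    """Diversity-first sampling sketch: draw at most `patches_per_slide`
    random patches from each whole-slide image, so the training set
    covers many slides rather than over-sampling a few large ones.
    `slide_patch_ids` maps a slide ID to a list of patch identifiers.
    (Hypothetical helper; the paper does not publish its sampling code.)"""
    rng = random.Random(seed)
    selected = []
    for slide_id, patch_ids in slide_patch_ids.items():
        k = min(patches_per_slide, len(patch_ids))
        selected.extend((slide_id, p) for p in rng.sample(patch_ids, k))
    return selected

# Example: three slides with very unequal patch counts. With a cap of 5,
# the large slide contributes no more patches than the cap allows, while
# every slide is still represented.
slides = {
    "slide_a": list(range(10)),
    "slide_b": list(range(200)),
    "slide_c": list(range(3)),
}
subset = sample_patches(slides, patches_per_slide=5)
```

A flat cap like this trades raw patch volume for slide coverage, which is the core idea the abstract credits for Athena's results.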

