
Why does my medical AI look at pictures of birds? Exploring the efficacy of transfer learning across domain boundaries (2306.17555v1)

Published 30 Jun 2023 in cs.CV

Abstract: It is an open secret that ImageNet is treated as the panacea of pretraining. Particularly in medical machine learning, models not trained from scratch are often finetuned based on ImageNet-pretrained models. We posit that pretraining on data from the domain of the downstream task should almost always be preferred instead. We leverage RadNet-12M, a dataset containing more than 12 million computed tomography (CT) image slices, to explore the efficacy of self-supervised pretraining on medical and natural images. Our experiments cover intra- and cross-domain transfer scenarios, varying data scales, finetuning vs. linear evaluation, and feature space analysis. We observe that intra-domain transfer compares favorably to cross-domain transfer, achieving comparable or improved performance (0.44%–2.07% performance increase using RadNet pretraining, depending on the experiment), and demonstrate the existence of a domain boundary-related generalization gap and domain-specific learned features.
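
As a rough illustration of the two transfer protocols the abstract contrasts, the sketch below sets up a classifier for either linear evaluation (frozen backbone, only a new linear head is trained) or full finetuning (all weights updated). The ResNet-50 backbone, the checkpoint path, and `num_classes` are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch: linear evaluation vs. full finetuning of a pretrained
# backbone. The ResNet-50 architecture, checkpoint path, and class count
# are hypothetical stand-ins for the paper's actual configuration.
import torch
import torch.nn as nn
from torchvision.models import resnet50


def build_classifier(weights_path: str, num_classes: int,
                     linear_eval: bool) -> nn.Module:
    backbone = resnet50()
    # Load self-supervised pretrained weights (e.g., from RadNet-12M or
    # ImageNet pretraining); the path here is hypothetical.
    state = torch.load(weights_path, map_location="cpu")
    backbone.load_state_dict(state, strict=False)

    if linear_eval:
        # Linear evaluation: freeze every backbone parameter so only the
        # classification head added below receives gradients.
        for p in backbone.parameters():
            p.requires_grad = False

    # Replace the final layer with a task-specific linear head, which is
    # always trainable (fresh parameters default to requires_grad=True).
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone


# Finetuning optimizes all parameters; linear evaluation only the head.
model = build_classifier("radnet_pretrained.pt", num_classes=5,
                         linear_eval=True)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01)
```

Under this setup, comparing the two protocols across pretraining sources (medical vs. natural images) is what separates genuinely transferable features from those that only help after full adaptation.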

Authors (11)
  1. Frederic Jonske (5 papers)
  2. Moon Kim (16 papers)
  3. Enrico Nasca (4 papers)
  4. Janis Evers (1 paper)
  5. Johannes Haubold (9 papers)
  6. René Hosch (2 papers)
  7. Felix Nensa (11 papers)
  8. Michael Kamp (24 papers)
  9. Constantin Seibold (28 papers)
  10. Jan Egger (94 papers)
  11. Jens Kleesiek (80 papers)