The Role of Pre-training Data in Transfer Learning (2302.13602v2)

Published 27 Feb 2023 in cs.CV and cs.LG

Abstract: The transfer learning paradigm of model pre-training and subsequent fine-tuning produces high-accuracy models. While most studies recommend scaling the pre-training size to benefit most from transfer learning, a question remains: what data and method should be used for pre-training? We investigate the impact of pre-training data distribution on the few-shot and full fine-tuning performance using 3 pre-training methods (supervised, contrastive language-image and image-image), 7 pre-training datasets, and 9 downstream datasets. Through extensive controlled experiments, we find that the choice of the pre-training data source is essential for few-shot transfer, but its role decreases as more data is made available for fine-tuning. Additionally, we explore the role of data curation and examine the trade-offs between label noise and the size of the pre-training dataset. We find that using 2000X more pre-training data from LAION can match the performance of supervised ImageNet pre-training. Furthermore, we investigate the effect of pre-training methods, comparing language-image contrastive vs. image-image contrastive, and find that the latter leads to better downstream accuracy.
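
The abstract describes the standard pre-train-then-fine-tune pipeline the paper evaluates, including the few-shot fine-tuning setting. The snippet below is a minimal sketch of that workflow under placeholder choices (a torchvision ResNet-50 with supervised ImageNet weights as the pre-trained backbone, CIFAR-10 as a stand-in downstream task, 5 shots per class, and arbitrary hyperparameters); it is not the authors' code, and every specific choice in it is an assumption for illustration only.

```python
# Minimal sketch of the pre-train -> few-shot fine-tune pipeline.
# Assumes PyTorch/torchvision; the backbone, dataset, shot count, and
# hyperparameters are illustrative placeholders, not the paper's exact setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
import torchvision
from torchvision import transforms

NUM_CLASSES = 10      # downstream label space (placeholder)
SHOTS_PER_CLASS = 5   # few-shot budget per class (placeholder)

# 1) Start from a pre-trained backbone (here: supervised ImageNet weights)
#    and replace its classification head for the downstream task.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# 2) Build a k-shot subset of the downstream training data.
tfm = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = torchvision.datasets.CIFAR10(root="data", train=True,
                                         download=True, transform=tfm)
per_class = {c: 0 for c in range(NUM_CLASSES)}
few_shot_indices = []
for idx, label in enumerate(train_set.targets):  # labels only, no image decode
    if per_class[label] < SHOTS_PER_CLASS:
        per_class[label] += 1
        few_shot_indices.append(idx)
    if all(v == SHOTS_PER_CLASS for v in per_class.values()):
        break
loader = DataLoader(Subset(train_set, few_shot_indices),
                    batch_size=16, shuffle=True)

# 3) Fine-tune end to end on the few-shot data.
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device).train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In the paper's framing, the pre-trained weights loaded in step 1 are the variable of interest (supervised vs. language-image contrastive vs. image-image contrastive, trained on different data sources), while the few-shot budget in step 2 controls how much downstream data is available for fine-tuning.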

Authors (6)
  1. Rahim Entezari (11 papers)
  2. Mitchell Wortsman (29 papers)
  3. Olga Saukh (17 papers)
  4. M. Moein Shariatnia (1 paper)
  5. Hanie Sedghi (35 papers)
  6. Ludwig Schmidt (80 papers)
Citations (17)
