A Data-Based Perspective on Transfer Learning (2207.05739v1)

Published 12 Jul 2022 in cs.LG

Abstract: It is commonly believed that, in transfer learning, including more pre-training data translates into better performance. However, recent evidence suggests that removing data from the source dataset can actually help too. In this work, we take a closer look at the role of the source dataset's composition in transfer learning and present a framework for probing its impact on downstream performance. Our framework gives rise to new capabilities, such as pinpointing transfer learning brittleness and detecting pathologies such as data leakage and the presence of misleading examples in the source dataset. In particular, we demonstrate that removing detrimental datapoints identified by our framework improves transfer learning performance from ImageNet on a variety of target tasks. Code is available at https://github.com/MadryLab/data-transfer
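The core idea of probing source-data influence can be sketched as follows. This is a minimal illustration, not the paper's actual implementation (see the linked repository for that): it assumes we have already pre-trained models on random subsets of source classes and recorded each model's downstream accuracy, then estimates each class's influence as the difference in mean accuracy between runs that included the class and runs that excluded it. All function and variable names here are hypothetical.

```python
from statistics import mean


def class_influences(runs, all_classes):
    """Estimate each source class's influence on downstream accuracy.

    runs: list of (included_classes, target_accuracy) pairs, one per
          model pre-trained on a random subset of source classes.
    Influence(c) = mean accuracy over runs containing c
                 - mean accuracy over runs excluding c.
    (A subset-based estimator in the spirit of the paper's framework;
    the real method is more sophisticated.)
    """
    influences = {}
    for c in all_classes:
        with_c = [acc for classes, acc in runs if c in classes]
        without_c = [acc for classes, acc in runs if c not in classes]
        influences[c] = mean(with_c) - mean(without_c)
    return influences


def drop_detrimental(influences):
    """Keep only source classes with non-negative estimated influence."""
    return {c for c, score in influences.items() if score >= 0.0}


# Toy counterfactual runs: class "B" consistently hurts transfer.
runs = [
    ({"A", "B"}, 0.60),
    ({"A"}, 0.90),
    ({"B"}, 0.50),
    ({"A", "C"}, 0.95),
]
inf = class_influences(runs, {"A", "B", "C"})
kept = drop_detrimental(inf)  # "B" is excluded from the pre-training set
```

In practice, one would then re-run pre-training on the filtered source classes and compare downstream accuracy, which is the experiment the abstract describes on ImageNet.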

Authors (6)
  1. Saachi Jain (14 papers)
  2. Hadi Salman (27 papers)
  3. Alaa Khaddaj (6 papers)
  4. Eric Wong (47 papers)
  5. Sung Min Park (10 papers)
  6. Aleksander Madry (86 papers)
Citations (30)