When does Bias Transfer in Transfer Learning? (2207.02842v1)

Published 6 Jul 2022 in cs.LG

Abstract: Using transfer learning to adapt a pre-trained "source model" to a downstream "target task" can dramatically increase performance with seemingly no downside. In this work, we demonstrate that there can exist a downside after all: bias transfer, or the tendency for biases of the source model to persist even after adapting the model to the target class. Through a combination of synthetic and natural experiments, we show that bias transfer both (a) arises in realistic settings (such as when pre-training on ImageNet or other standard datasets) and (b) can occur even when the target dataset is explicitly de-biased. As transfer-learned models are increasingly deployed in the real world, our work highlights the importance of understanding the limitations of pre-trained source models. Code is available at https://github.com/MadryLab/bias-transfer
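The abstract refers to the standard transfer-learning workflow in which bias transfer can arise: take an ImageNet-pretrained source model and fine-tune it on a downstream target task. As a rough illustration only (not the authors' code; see the linked repository for that), a minimal PyTorch-style sketch of this setup might look like the following, assuming a torchvision ResNet-18 source model and a hypothetical 10-class target task:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained "source model" (ResNet-18 here for brevity).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Adapt it to a hypothetical 10-class "target task" by replacing the head.
num_target_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# Full fine-tuning: every source-model weight stays trainable, so spurious
# correlations learned during pre-training can persist (bias transfer) even
# if the target dataset itself has been de-biased.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch from the target task."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The paper's experiments contrast settings like this full fine-tuning with explicitly de-biased target data, showing that source-model biases can survive adaptation either way.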

Authors (6)
  1. Hadi Salman (27 papers)
  2. Saachi Jain (14 papers)
  3. Andrew Ilyas (39 papers)
  4. Logan Engstrom (27 papers)
  5. Eric Wong (47 papers)
  6. Aleksander Madry (86 papers)
Citations (28)
