
Fine-tuning can cripple your foundation model; preserving features may be the solution (2308.13320v3)

Published 25 Aug 2023 in cs.LG and cs.CV

Abstract: Pre-trained foundation models, due to their enormous capacity and exposure to vast amounts of data during pre-training, are known to have learned plenty of real-world concepts. An important step in making these pre-trained models effective on downstream tasks is to fine-tune them on related datasets. While various fine-tuning methods have been devised and have been shown to be highly effective, we observe that a fine-tuned model's ability to recognize concepts on tasks $\textit{different}$ from the downstream one is reduced significantly compared to its pre-trained counterpart. This is an undesirable effect of fine-tuning as a substantial amount of resources was used to learn these pre-trained concepts in the first place. We call this phenomenon "concept forgetting" and via experiments show that most end-to-end fine-tuning approaches suffer heavily from this side effect. To this end, we propose a simple fix to this problem by designing a new fine-tuning method called $\textit{LDIFS}$ (short for $\ell_2$ distance in feature space) that, while learning new concepts related to the downstream task, allows a model to preserve its pre-trained knowledge as well. Through extensive experiments on 10 fine-tuning tasks we show that $\textit{LDIFS}$ significantly reduces concept forgetting. Additionally, we show that LDIFS is highly effective in performing continual fine-tuning on a sequence of tasks as well, in comparison with both fine-tuning as well as continual learning baselines.

Analyzing Concept Forgetting in Fine-tuning Foundation Models

The paper, "Fine-tuning can cripple your foundation model; preserving features may be the solution," addresses a critical issue in the fine-tuning of pre-trained foundation models, commonly referred to as "concept forgetting." This phenomenon occurs when a model, despite achieving excellent performance on a downstream task after fine-tuning, loses its ability to recognize concepts from its pre-training dataset. This is a significant drawback, given the extensive resources allocated to pre-training these models on vast datasets.

Summary of Findings

Concept Forgetting During Fine-tuning

The authors observe that most end-to-end fine-tuning approaches, such as ZS-init-CE, LP-init-CE, and others, result in the model losing knowledge about real-world concepts not covered in the fine-tuning dataset. This is quantified using the difference in linear probe accuracy (ΔLP) between the pre-trained and fine-tuned models on various tasks. A consistent pattern emerges where fine-tuning on a narrow set of concepts reduces the model’s performance on a broader array of tasks, confirming the existence of concept forgetting.
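To make the ΔLP measurement concrete, the sketch below fits a linear probe on frozen features from the pre-trained and fine-tuned encoders and reports the accuracy gap on a probe task. This is a minimal illustration of the metric as described above; the function names, argument layout, and sign convention are illustrative, not taken from the paper's code.

```python
# Minimal sketch of the Delta-LP measurement, assuming `pre_feats` and
# `ft_feats` hold frozen features extracted from the pre-trained and
# fine-tuned encoders on a probe task. Names are illustrative, not from
# the paper's code.
from sklearn.linear_model import LogisticRegression

def linear_probe_accuracy(feats_train, y_train, feats_test, y_test):
    """Fit a linear classifier on frozen features and report test accuracy."""
    probe = LogisticRegression(max_iter=1000)
    probe.fit(feats_train, y_train)
    return probe.score(feats_test, y_test)

def delta_lp(pre_feats, ft_feats, labels):
    """Delta-LP for one probe task: fine-tuned minus pre-trained LP accuracy.

    Each argument is a dict with 'train' and 'test' arrays. Under this sign
    convention, a negative value indicates concept forgetting on the task.
    """
    acc_pre = linear_probe_accuracy(pre_feats["train"], labels["train"],
                                    pre_feats["test"], labels["test"])
    acc_ft = linear_probe_accuracy(ft_feats["train"], labels["train"],
                                   ft_feats["test"], labels["test"])
    return acc_ft - acc_pre
```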

Analysis of Fine-tuning Methods

Among the fine-tuning methods examined, L2SP stands out for its ability to reduce concept forgetting by regularizing the model to remain close to its original parameters in the parameter space. This inspired the authors to propose the LDIFS (ℓ₂ distance in feature space) regularizer, which instead maintains the model's proximity to its pre-trained feature space, thus preserving its input-output behavior. Analysis shows that LDIFS reduces concept forgetting more effectively than parameter-space regularizers like L2SP.
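Concretely, where L2SP penalizes the parameter-space distance $\lambda\lVert\theta - \theta_0\rVert_2^2$, LDIFS instead penalizes the $\ell_2$ distance between the current and pre-trained features on each input. Below is a minimal PyTorch-style sketch of such an objective, assuming the penalty is applied at the final feature-extractor output with an illustrative weight `lam`; the paper's exact choice of feature points may differ.

```python
# Minimal PyTorch sketch of an LDIFS-style objective: the usual task loss plus
# an l2 penalty pulling the fine-tuned features toward those of a frozen copy
# of the pre-trained encoder. Applying the penalty at the final encoder output
# and the weight `lam` are illustrative assumptions.
import copy

import torch
import torch.nn.functional as F

def make_frozen_copy(encoder: torch.nn.Module) -> torch.nn.Module:
    """Snapshot the pre-trained encoder so it can serve as a fixed reference."""
    frozen = copy.deepcopy(encoder)
    for p in frozen.parameters():
        p.requires_grad_(False)
    return frozen.eval()

def ldifs_loss(encoder, frozen_encoder, head, x, y, lam=0.1):
    """Cross-entropy on the downstream task plus a feature-space l2 penalty."""
    feats = encoder(x)                        # features being fine-tuned
    with torch.no_grad():
        feats_pre = frozen_encoder(x)         # frozen pre-trained features
    task_loss = F.cross_entropy(head(feats), y)
    feat_dist = F.mse_loss(feats, feats_pre)  # mean squared l2 distance
    return task_loss + lam * feat_dist
```

The design contrast is the key point: L2SP constrains where the weights can go, while this objective constrains what the network computes, which is what ultimately determines whether pre-trained concepts remain recoverable.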

Experimental Validation

The authors demonstrate the efficacy of LDIFS through experiments on ten fine-tuning tasks, observing substantially lower concept forgetting than with alternative methods. LDIFS also remains competitive with existing fine-tuning techniques in downstream accuracy on the fine-tuned tasks themselves. The results hold both for individual-task fine-tuning and for continual fine-tuning over a sequence of tasks, where LDIFS outperforms classic continual learning baselines.
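For the continual setting, a hypothetical training loop is sketched below, reusing `ldifs_loss` and `make_frozen_copy` from the previous snippet: each task gets its own classification head while the shared encoder stays anchored to the original pre-trained features. Anchoring to the pre-trained model throughout is an assumption made for illustration, not necessarily the paper's exact protocol.

```python
# Hypothetical continual fine-tuning loop reusing `ldifs_loss` and
# `make_frozen_copy` from the sketch above. The shared encoder is fine-tuned
# on tasks in sequence, each with its own classification head, while the
# LDIFS penalty keeps its features close to the original pre-trained ones.
import torch

def continual_finetune(encoder, frozen_encoder, heads, task_loaders,
                       epochs=5, lam=0.1, lr=1e-4):
    for head, loader in zip(heads, task_loaders):  # one (head, loader) per task
        opt = torch.optim.AdamW(
            list(encoder.parameters()) + list(head.parameters()), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                loss = ldifs_loss(encoder, frozen_encoder, head, x, y, lam=lam)
                opt.zero_grad()
                loss.backward()
                opt.step()
```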

Implications and Future Prospects

The implications of this research are manifold. Practically, LDIFS offers a robust solution for deploying foundation models in scenarios requiring both task specialization and broad generalization capabilities. Theoretically, it advances understanding of the impact of feature space preservation on model robustness.

The paper points to several natural extensions for future work. First, applying the insights from LDIFS to other model families, such as LLMs, could reveal more general principles governing the trade-off between fine-tuning and knowledge preservation. Second, understanding the granularity of concepts in foundation models and developing more refined measures of concept forgetting and retention would strengthen model evaluation. Finally, further optimizing the feature-space distance measure within LDIFS could yield variants tailored to specific tasks or domains, facilitating customized solutions across a wide range of AI applications.

In conclusion, this paper provides a substantial contribution to the discourse on fine-tuning foundation models by identifying and addressing the issue of concept forgetting. The proposed LDIFS method presents a streamlined approach to mitigating this effect, suggesting a promising avenue for future research in maintaining model generality post-fine-tuning.

Authors (4)
  1. Jishnu Mukhoti (10 papers)
  2. Yarin Gal (170 papers)
  3. Philip H. S. Torr (219 papers)
  4. Puneet K. Dokania (44 papers)
Citations (16)