Introduction
With LLMs central to breakthroughs across NLP tasks, fine-tuning pre-trained LLMs on downstream tasks has become standard practice. The intent is to leverage the broad base knowledge of LLMs while specializing their capabilities for specific tasks. However, while fine-tuning improves task-specific performance, it comes with an undesirable companion: forgetting, a significant decline in performance on the tasks and knowledge the model originally learned.
Forgetting Phenomenon in LLMs
Recent examinations of forgetting, especially under parameter-efficient fine-tuning (PEFT) strategies such as Low-Rank Adaptation (LoRA), have revealed that such strategies are not immune to this concern. Notably, the analyses reveal an inverse linear relationship between fine-tuning performance on downstream tasks and the amount of forgetting observed. This suggests that simply fine-tuning fewer parameters or halting training prematurely does not alleviate forgetting, highlighting the need for fine-tuning approaches that mitigate this effect without compromising downstream performance. A minimal sketch of how LoRA constrains the number of tuned parameters appears below.
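The sketch below illustrates the parameter-count knob in question: LoRA limits how many parameters are trained through its adapter rank. It is a minimal example using the Hugging Face peft library; the model name and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Minimal LoRA setup (illustrative; model and hyperparameters are assumptions).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=8,                                  # adapter rank: controls how many parameters are tuned
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the adapter weights are trainable
```

Varying r changes the number of tuned parameters; the paper's point is that shrinking this number does not, by itself, eliminate forgetting.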
Quantifying Forgetting and Scaling Laws
The paper presents an in-depth study of how forgetting scales with several factors: the number of parameters tuned, the number of update steps, and the fine-tuning loss itself. It finds that the degree of forgetting is well described by a shifted power law. It also introduces a new forgetting metric based on the cross-entropy between the predictions of the pre-trained and fine-tuned models, arguing that it captures the shift in a model's knowledge after fine-tuning more faithfully than traditional benchmarks.
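To make these two quantitative ideas concrete: the shifted power law can be written schematically as Forgetting ≈ A·(x + δ)^α + C, where x stands for the number of tuned parameters or update steps and A, δ, α, C are fitted constants (the symbols here are generic placeholders, not the paper's notation). The Python sketch below shows one way a cross-entropy-based forgetting metric could be computed, comparing the fine-tuned model's next-token distribution against the pre-trained model's on held-out text; the model names and helper function are assumptions for illustration, not the paper's exact implementation.

```python
# Sketch of a forgetting metric: cross-entropy between the pre-trained model's
# next-token distribution and the fine-tuned model's, on held-out text.
# Model names and this helper are illustrative assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "meta-llama/Llama-2-7b-hf"    # assumed pre-trained base model
tuned_name = "path/to/finetuned-model"    # assumed fine-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_name)
base = AutoModelForCausalLM.from_pretrained(base_name).eval()
tuned = AutoModelForCausalLM.from_pretrained(tuned_name).eval()

@torch.no_grad()
def forgetting_cross_entropy(text: str) -> float:
    """Average per-token cross-entropy of the fine-tuned model's
    next-token distribution against the pre-trained model's."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    base_probs = F.softmax(base(ids).logits, dim=-1)       # [1, seq, vocab]
    tuned_log_probs = F.log_softmax(tuned(ids).logits, dim=-1)
    # H(p_base, p_tuned), averaged over token positions
    ce = -(base_probs * tuned_log_probs).sum(dim=-1).mean()
    return ce.item()
```

A larger value indicates that the fine-tuned model's predictions have drifted further from the pre-trained model's on text of the kind it originally modeled, which is the intuition behind using cross-entropy rather than benchmark accuracy alone.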
Empirical Observations and Implications
The empirical investigations highlight the damage forgetting does to both a model's reasoning ability and its safety behavior. Models fine-tuned on data that required learning new information or improved instruction-following showed pronounced forgetting of previously acquired knowledge and safety behaviors, underscoring that effective fine-tuning must weigh new-task performance against the preservation of baseline capabilities.
Future Directions
The findings suggest that forgetting cannot be solved simply by reducing the number of fine-tuned parameters or shortening training. Future research will need to explore fine-tuning methods that deliberately retain pre-trained capabilities while accommodating new learning. Addressing the limitations exposed by the established scaling laws will be instrumental in developing LLMs that excel at specific tasks while retaining the foundational knowledge and comprehension acquired during pre-training.
In essence, the paper argues that although LoRA and other PEFT methods save compute and achieve fine-tuning results comparable to full-model tuning, they offer no escape from forgetting. This calls for a careful reevaluation of fine-tuning practices and a proactive search for methods that balance task-specific learning against knowledge retention in LLMs.