Task-Specific Skill Localization in Fine-tuned LLMs
Overview
The paper "Task-Specific Skill Localization in Fine-tuned LLMs" studies how pre-trained LLMs acquire skills when fine-tuned for NLP tasks. The focus is on identifying the specific subset of parameters responsible for a fine-tuned model's task performance, a problem the authors term skill localization. The proposed approach uses an optimization method to identify these critical parameters, the so-called "grafting region", which constitutes only about 0.01% of the model's parameters yet accounts for most of its task performance.
Methodology
The core innovation is model grafting, which achieves skill localization without additional re-training. Grafting copies the values of the identified sparse parameters from the fine-tuned model into the pre-trained model, effectively localizing the skill. The grafted model performs nearly as well as the fully fine-tuned one, with significant improvements in calibration error (a 40%-90% reduction) and on out-of-distribution predictions, and without susceptibility to catastrophic forgetting. The implications extend to multi-task and continual learning, where largely disjoint parameter subsets emerge for different tasks, suggesting a potential method for transferring skills across related tasks.
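The grafting step itself can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: parameters are flattened into plain numpy vectors, and the binary mask marking the grafting region is taken as given (the paper's optimization procedure for finding that mask is not shown here).

```python
import numpy as np

def graft(pretrained, finetuned, mask):
    """Build a grafted parameter vector: where mask == 1, take the
    fine-tuned value; everywhere else, keep the pre-trained value.
    Toy sketch -- real models would apply this per weight tensor."""
    return np.where(mask.astype(bool), finetuned, pretrained)

# Toy example: 10 parameters, grafting region covers only indices 3 and 7.
pretrained = np.zeros(10)                      # stand-in pre-trained weights
finetuned = np.arange(10, dtype=float)         # stand-in fine-tuned weights
mask = np.zeros(10)
mask[[3, 7]] = 1.0                             # hypothetical grafting region
grafted = graft(pretrained, finetuned, mask)   # pre-trained except at 3 and 7
```

The grafted vector matches the pre-trained weights everywhere outside the region, which is why the approach avoids catastrophic forgetting: almost all of the original model is left untouched.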
Numerical Results
Key quantitative results include the finding that sparse graft regions consisting of only 0.01% of model parameters retain over 95% of the fully fine-tuned model's performance, while also improving calibration and generalization. For models fine-tuned across multiple tasks, these regions show minimal overlap, suggesting a clear demarcation of task-specific parameters and hinting at compositional skill capabilities.
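The "minimal overlap" claim can be made concrete with a simple metric. The sketch below, under the assumption that each task's grafting region is represented as a set of parameter indices, computes the Jaccard overlap between two regions; the index sets shown are hypothetical, not taken from the paper.

```python
def region_overlap(region_a, region_b):
    """Jaccard overlap between two grafting regions, each given as a
    set of parameter indices. 0.0 means fully disjoint, 1.0 identical."""
    a, b = set(region_a), set(region_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical grafting regions for two tasks (parameter indices).
task1_region = {12, 405, 9001}
task2_region = {77, 9001, 31337}
overlap = region_overlap(task1_region, task2_region)  # one shared index out of five
```

Near-zero values of this metric across task pairs are what would indicate the disjoint, task-specific regions the paper reports.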
Implications and Future Directions
The implications of this research are significant for parameter-efficient model deployment, potentially reducing the computational and storage overhead of fine-tuning. The observed calibration improvements may have practical value in safety-critical NLP applications, where confidence calibration is crucial. The work opens new avenues for transferring learned capabilities through minimal parameter grafting and offers insight into model interpretability and explainability. The identified disjoint, task-specific regions provide a new lens on modular task learning and may enable robust continual learning frameworks that do not forget previously learned skills. Future research could probe the underlying mechanisms of skill localization more deeply and extend the method to other model architectures and to domains beyond NLP, toward generalized solutions for efficient model training and deployment.