
Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models (2203.06904v2)

Published 14 Mar 2022 in cs.CL, cs.AI, and cs.LG

Abstract: Despite the success, the process of fine-tuning large-scale PLMs brings prohibitive adaptation costs. In fact, fine-tuning all the parameters of a colossal model and retaining separate instances for different tasks are practically infeasible. This necessitates a new branch of research focusing on the parameter-efficient adaptation of PLMs, dubbed as delta tuning in this paper. In contrast with the standard fine-tuning, delta tuning only fine-tunes a small portion of the model parameters while keeping the rest untouched, largely reducing both the computation and storage costs. Recent studies have demonstrated that a series of delta tuning methods with distinct tuned parameter selection could achieve performance on a par with full-parameter fine-tuning, suggesting a new promising way of stimulating large-scale PLMs. In this paper, we first formally describe the problem of delta tuning and then comprehensively review recent delta tuning approaches. We also propose a unified categorization criterion that divide existing delta tuning methods into three groups: addition-based, specification-based, and reparameterization-based methods. Though initially proposed as an efficient method to steer large models, we believe that some of the fascinating evidence discovered along with delta tuning could help further reveal the mechanisms of PLMs and even deep neural networks. To this end, we discuss the theoretical principles underlying the effectiveness of delta tuning and propose frameworks to interpret delta tuning from the perspective of optimization and optimal control, respectively. Furthermore, we provide a holistic empirical study of representative methods, where results on over 100 NLP tasks demonstrate a comprehensive performance comparison of different approaches. The experimental results also cover the analysis of combinatorial, scaling and transferable properties of delta tuning.

Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models

The paper presents a comprehensive study of "delta tuning," a term it introduces for the parameter-efficient adaptation of large pre-trained language models (PLMs). With the ever-increasing scale of PLMs, full fine-tuning becomes computationally prohibitive and storage-demanding. Delta tuning addresses this by updating only a small subset of model parameters, significantly reducing computational and storage costs while achieving performance comparable to full fine-tuning.
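
The core recipe is easy to state in code. The sketch below is a minimal illustration rather than the paper's implementation: it assumes PyTorch, uses a toy two-layer network as a stand-in for a PLM, and picks the simplest possible delta, namely leaving only the bias terms trainable (a bias-only, BitFit-style choice that falls under the specification-based category discussed below).

```python
import torch
import torch.nn as nn

# Toy stand-in for a pre-trained language model; in practice this would be a
# loaded PLM checkpoint such as a Transformer encoder.
backbone = nn.Sequential(
    nn.Linear(768, 3072), nn.GELU(), nn.Linear(3072, 768)
)

# Delta tuning: freeze everything, then re-enable only a tiny subset.
for param in backbone.parameters():
    param.requires_grad = False
for name, param in backbone.named_parameters():
    if name.endswith("bias"):  # bias-only tuning (a specification-based method)
        param.requires_grad = True

trainable = [p for p in backbone.parameters() if p.requires_grad]
total = sum(p.numel() for p in backbone.parameters())
tuned = sum(p.numel() for p in trainable)
print(f"tuning {tuned}/{total} parameters ({100 * tuned / total:.2f}%)")

# Only the delta parameters ever reach the optimizer; the frozen weights
# produce no gradients and need no optimizer state.
optimizer = torch.optim.AdamW(trainable, lr=1e-3)
```

For this toy model the trainable fraction is well under 0.1%, which is why per-task storage and optimizer memory shrink so dramatically.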

Key Contributions

  1. Definition and Categorization: Delta tuning is defined as a method where only a minimal set of parameters is tuned, contrasting with traditional fine-tuning where all parameters are updated. The paper categorizes existing delta tuning methods into three groups:
    • Addition-based Methods: These involve adding new trainable parameters, such as adapter modules, to the existing model architecture (see the adapter sketch after this list).
    • Specification-based Methods: These selectively update a small set of existing parameters, chosen by heuristics or learned criteria (the bias-only example sketched above is one such method).
    • Reparameterization-based Methods: These transform the parameter space into a lower-dimensional representation, motivated by hypotheses about the low-rank or low-dimensional nature of adaptation (see the LoRA-style sketch after this list).
  2. Theoretical Frameworks: The paper explores delta tuning from both optimization and optimal control perspectives:
    • Optimization Perspective: The authors discuss how delta tuning can be seen as subspace optimization or functional approximation within a neural network, leveraging the low intrinsic dimensionality of adaptation.
    • Optimal Control Perspective: It views delta tuning as an optimal control problem where the delta parameters act as controllers to steer the PLM towards desired outcomes.
  3. Empirical Study: Extensive experiments across 100+ NLP tasks reveal the practical effectiveness of delta tuning. Key findings include:
    • Comparable performance to full fine-tuning, especially when the scale of the PLM increases.
    • Enhanced convergence rates and performance when combining multiple delta tuning methods.
    • Noteworthy transferability of delta tuning methods across tasks, highlighting the potential for knowledge sharing through trained delta modules.
  4. Applications: Delta tuning is particularly valuable in scenarios requiring efficient computation and storage, such as:
    • Multi-task learning and the creation of shareable, task-specific checkpoints (see the checkpoint sketch after this list).
    • Mitigating catastrophic forgetting in lifelong learning settings.
    • Facilitating PLMs-as-a-service, where many users can efficiently adapt and deploy a shared model for various downstream tasks.
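
To make the addition-based category concrete, the following sketch shows a bottleneck adapter in the Houlsby et al. style: a small down-projection/up-projection pair with a residual connection, inserted after a frozen sublayer. The class names and dimensions are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: the only new, trainable parameters (addition-based)."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the frozen model's behaviour as the default.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

class AdaptedSublayer(nn.Module):
    """Wraps a frozen pretrained sublayer and inserts an adapter after it."""
    def __init__(self, frozen_sublayer: nn.Module, hidden_size: int):
        super().__init__()
        self.sublayer = frozen_sublayer
        for p in self.sublayer.parameters():
            p.requires_grad = False  # pretrained weights stay untouched
        self.adapter = Adapter(hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.sublayer(x))

# Usage: wrap any (frozen) pretrained sublayer; only the adapter is trained.
block = AdaptedSublayer(nn.Linear(768, 768), hidden_size=768)
out = block(torch.randn(2, 16, 768))
```

In a Transformer, such adapters are typically inserted after the attention and feed-forward sublayers of each block, and only the adapter weights are optimized.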
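
The reparameterization-based category instead constrains the update itself: the adapted parameters are written as the frozen pretrained weights plus a delta with far fewer degrees of freedom, echoing the low-intrinsic-dimension argument in the optimization perspective above. The best-known instance is LoRA, which factorizes the update to a weight matrix as a low-rank product. The sketch below is a minimal illustration; the hyperparameter names (r, alpha) follow common LoRA usage rather than anything defined in this paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained W and b are frozen
        self.scaling = alpha / r
        # A starts small and random, B starts at zero, so training begins from
        # the unmodified pretrained behaviour.
        self.lora_A = nn.Parameter(0.01 * torch.randn(r, base.in_features))
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = (x @ self.lora_A.T) @ self.lora_B.T
        return self.base(x) + self.scaling * delta

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 16, 768))  # trains 2 * 8 * 768 parameters instead of 768 * 768
```

Because the rank r is small, the trainable parameter count grows linearly rather than quadratically with the hidden size, and the learned update can be merged back into the frozen weight matrix at inference time.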
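
The storage argument behind shareable checkpoints and PLMs-as-a-service is that only the delta parameters need to be saved and exchanged per task, while the frozen backbone is stored once. Below is a hedged sketch of that workflow, again assuming PyTorch and the requires_grad convention used in the sketches above.

```python
import torch
import torch.nn as nn

def save_delta(model: nn.Module, path: str) -> None:
    """Save only the trainable (delta) parameters -- typically a tiny fraction
    of the full model -- so each task ships a small checkpoint."""
    delta = {name: p.detach().cpu()
             for name, p in model.named_parameters() if p.requires_grad}
    torch.save(delta, path)

def load_delta(model: nn.Module, path: str) -> None:
    """Apply a task-specific delta on top of a shared, frozen backbone."""
    model.load_state_dict(torch.load(path), strict=False)
```

A single frozen backbone can then serve many tasks by swapping these small files in and out, which is what makes multi-task serving, checkpoint sharing, and PLMs-as-a-service economical.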

Implications and Future Directions

Delta tuning offers a promising approach to efficiently leverage the power of large PLMs, making them more accessible and deployable across different computational environments. As PLMs grow ever larger, methods like delta tuning will likely gain prominence in both academic research and practical industry applications. Future research may focus on further refining these methods, exploring additional theoretical frameworks, and broadening the range of applications in AI systems. This paper lays a foundation for ongoing innovations in the efficient deployment of PLMs, pointing towards a scalable approach to model adaptation in NLP and beyond.

Authors (20)
  1. Ning Ding (122 papers)
  2. Yujia Qin (41 papers)
  3. Guang Yang (422 papers)
  4. Fuchao Wei (4 papers)
  5. Zonghan Yang (23 papers)
  6. Yusheng Su (21 papers)
  7. Shengding Hu (34 papers)
  8. Yulin Chen (134 papers)
  9. Chi-Min Chan (18 papers)
  10. Weize Chen (34 papers)
  11. Jing Yi (11 papers)
  12. Weilin Zhao (22 papers)
  13. Xiaozhi Wang (51 papers)
  14. Zhiyuan Liu (433 papers)
  15. Hai-Tao Zheng (94 papers)
  16. Jianfei Chen (63 papers)
  17. Yang Liu (2253 papers)
  18. Jie Tang (302 papers)
  19. Juanzi Li (144 papers)
  20. Maosong Sun (337 papers)
Citations (188)