DiffoRA: Enabling Parameter-Efficient Fine-Tuning via Differential Module Selection (2502.08905v2)

Published 13 Feb 2025 in cs.CV

Abstract: Parameter-Efficient Fine-Tuning (PEFT) methods have been extensively studied for adapting LLMs to downstream tasks. Among existing approaches, Low-Rank Adaptation (LoRA) has gained popularity for its streamlined design, which injects low-rank matrices into pre-trained models. Though effective, LoRA and its adaptive variants either allocate identical low-rank matrices to all modules or adjust the interior rank of the components based on importance-scoring indicators. In this paper, we argue that not all modules in LLMs are suitable or necessary to fine-tune. Motivated by this insight, we propose a new PEFT scheme called DiffoRA, which enables adaptive adoption of the low-rank decomposition matrices. At the core of DiffoRA lies a Differential Adaptation Matrix (DAM) that determines which modules are the most suitable and essential for fine-tuning. We theoretically explain how the designed matrix impacts the convergence rate and generalization capability of a pre-trained model. We then construct the DAM via continuous relaxation and discretization with weight-sharing optimizations. We fully implement DiffoRA and design comprehensive experiments to evaluate its performance. The experimental results demonstrate that DiffoRA delivers state-of-the-art results across multiple benchmarks.
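
The abstract does not spell out how the DAM is built, so the following is only a minimal PyTorch sketch of the general pattern it describes: each candidate module receives a LoRA update scaled by a learnable gate (the continuous relaxation), and after training the gates are thresholded (the discretization) so that only the selected modules retain their updates. `GatedLoRALinear`, the sigmoid gate, and the 0.5 threshold are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedLoRALinear(nn.Module):
    """A frozen linear layer plus a LoRA update scaled by a learnable gate.

    The scalar gate is a continuous relaxation of the binary decision
    "should this module be fine-tuned?"; thresholding it after training
    discretizes the selection. This is an illustrative sketch, not the
    paper's DAM construction.
    """

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pre-trained weights stay frozen
        # Standard LoRA factors: B starts at zero so the update is initially a no-op.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        # Relaxed module selector, trained jointly with the LoRA factors.
        self.gate_logit = nn.Parameter(torch.zeros(()))

    def forward(self, x: torch.Tensor, hard: bool = False) -> torch.Tensor:
        gate = torch.sigmoid(self.gate_logit)
        if hard:
            # Discretization: keep the module's update only if its gate passed 0.5.
            gate = (gate > 0.5).float()
        return self.base(x) + gate * (x @ self.lora_A.T @ self.lora_B.T)
```

In a differentiable-selection pipeline of the kind the abstract outlines (akin to differentiable architecture search), the gate logits for all candidate modules would be optimized jointly with the LoRA factors, optionally sharing weights across candidates to reduce cost, and the final discrete selection keeps only the modules whose gates survive the threshold.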
