
IncreLoRA: Incremental Parameter Allocation Method for Parameter-Efficient Fine-tuning (2308.12043v1)

Published 23 Aug 2023 in cs.CL, cs.AI, and cs.LG

Abstract: With the increasing size of pre-trained LLMs (PLMs), fine-tuning all the parameters in the model is not efficient, especially when there are a large number of downstream tasks, which incurs significant training and storage costs. Many parameter-efficient fine-tuning (PEFT) approaches have been proposed, among which Low-Rank Adaptation (LoRA) is a representative approach that injects trainable rank-decomposition matrices into every target module. Yet LoRA ignores the importance of parameters in different modules. To address this problem, many works have been proposed to prune the parameters of LoRA. However, under limited training conditions, the upper bound on the rank of the pruned parameter matrix is still constrained by the preset values. We therefore propose IncreLoRA, an incremental parameter allocation method that adaptively adds trainable parameters during training based on the importance score of each module. This approach differs from pruning methods in that it is not limited by the initial number of trainable parameters, and each parameter matrix has a higher rank upper bound for the same training overhead. We conduct extensive experiments on GLUE to demonstrate the effectiveness of IncreLoRA. The results show that our method achieves higher parameter efficiency, especially in low-resource settings, where it significantly outperforms the baselines. Our code is publicly available.
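To make the core idea concrete, the sketch below illustrates a LoRA-style update whose rank can be grown during training, with extra rank given to the modules scoring highest on an importance proxy. This is a minimal, hypothetical illustration, not the authors' released implementation: the class names, the `grow_rank` helper, and the gradient-times-weight importance score are assumptions made for exposition.

```python
# Minimal sketch of incremental rank allocation for a LoRA-style adapter.
# Not the IncreLoRA reference code; names and the importance proxy are illustrative.

import torch
import torch.nn as nn


class IncrementalLoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank update B @ A whose rank can be increased."""

    def __init__(self, base: nn.Linear, init_rank: int = 1, scaling: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        self.scaling = scaling
        out_f, in_f = base.out_features, base.in_features
        # LoRA factors: A (r x in), B (out x r); B starts at zero so the
        # update is a no-op at initialization, as in standard LoRA.
        self.A = nn.Parameter(torch.randn(init_rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, init_rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.t() @ self.B.t())

    @torch.no_grad()
    def grow_rank(self, extra: int = 1):
        """Append `extra` new rank-1 components (the incremental allocation step).
        Note: a real training loop must re-register the new Parameters with the optimizer."""
        in_f, out_f = self.A.shape[1], self.B.shape[0]
        new_A = torch.randn(extra, in_f, device=self.A.device) * 0.01
        new_B = torch.zeros(out_f, extra, device=self.B.device)
        self.A = nn.Parameter(torch.cat([self.A.data, new_A], dim=0))
        self.B = nn.Parameter(torch.cat([self.B.data, new_B], dim=1))

    def importance(self) -> float:
        """Crude importance proxy: mean |grad * weight| over the LoRA factors.
        The paper's actual score is more involved; this is only illustrative."""
        score, n = 0.0, 0
        for p in (self.A, self.B):
            if p.grad is not None:
                score += (p.grad * p).abs().mean().item()
                n += 1
        return score / max(n, 1)


# Usage sketch: periodically give extra rank to the highest-scoring module.
if __name__ == "__main__":
    layers = [IncrementalLoRALinear(nn.Linear(64, 64)) for _ in range(4)]
    x = torch.randn(8, 64)
    loss = sum(layer(x).pow(2).mean() for layer in layers)
    loss.backward()
    best = max(layers, key=lambda l: l.importance())
    best.grow_rank(extra=1)  # allocate more trainable parameters where they matter
```

Because allocation only ever adds components, the reachable rank of each module is not capped by an initial per-module budget, which is the contrast with pruning-based LoRA variants that the abstract draws.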

Authors (6)
  1. Feiyu Zhang (2 papers)
  2. Liangzhi Li (28 papers)
  3. Junhao Chen (36 papers)
  4. Zhouqiang Jiang (8 papers)
  5. Bowen Wang (76 papers)
  6. Yiming Qian (32 papers)
Citations (25)