
LoRA-Flow: Dynamic LoRA Fusion for Large Language Models in Generative Tasks (2402.11455v1)

Published 18 Feb 2024 in cs.CL

Abstract: LoRA employs lightweight modules to customize LLMs for each downstream task or domain, where different learned additional modules represent diverse skills. Combining existing LoRAs to address new tasks can enhance the reusability of learned LoRAs, which is particularly beneficial for tasks with limited annotated data. Most prior works on LoRA combination rely on task-level weights for each involved LoRA, so different examples and tokens share the same LoRA weights. However, in generative tasks, different tokens may require different skills to handle. Taking the Chinese math task as an example, understanding the problem description may depend more on the Chinese LoRA, while the calculation part may rely more on the math LoRA. To this end, we propose LoRA-Flow, which utilizes dynamic weights to adjust the impact of different LoRAs. The weights at each step are determined by a fusion gate with extremely few parameters, which can be learned with only 200 training examples. Experiments across six generative tasks demonstrate that our method consistently outperforms baselines with task-level fusion weights. This underscores the necessity of introducing dynamic fusion weights for LoRA combination.
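The core mechanism described in the abstract, a per-token fusion gate over several pre-trained LoRA modules, can be sketched in a few lines of PyTorch. The sketch below is illustrative only and is not the paper's released code: class and attribute names such as `LoRAModule`, `DynamicLoRAFusion`, and `gate`, as well as the rank and scaling defaults, are assumptions. It shows the idea of computing softmax weights over k LoRAs from the current hidden state and summing their outputs token by token.

```python
# Hypothetical sketch of per-token LoRA fusion (illustrative names, not the
# authors' implementation). Assumes k trained LoRA modules attached to one
# linear layer of a transformer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRAModule(nn.Module):
    """A single low-rank adapter: x -> (x A^T) B^T, scaled by alpha / rank."""
    def __init__(self, d_in, d_out, rank=8, alpha=16.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        self.scale = alpha / rank

    def forward(self, x):  # x: (batch, seq, d_in)
        return (x @ self.A.T @ self.B.T) * self.scale

class DynamicLoRAFusion(nn.Module):
    """Combines k LoRAs with token-level weights produced by a tiny gate."""
    def __init__(self, base_linear, loras):
        super().__init__()
        self.base = base_linear            # frozen pretrained projection W
        self.loras = nn.ModuleList(loras)  # previously learned LoRA modules
        # Fusion gate: a single linear layer mapping the hidden state at each
        # position to one weight per LoRA (very few trainable parameters).
        self.gate = nn.Linear(base_linear.in_features, len(loras))

    def forward(self, x):  # x: (batch, seq, d_in)
        weights = F.softmax(self.gate(x), dim=-1)                      # (batch, seq, k)
        lora_outs = torch.stack([l(x) for l in self.loras], dim=-1)    # (batch, seq, d_out, k)
        fused = (lora_outs * weights.unsqueeze(-2)).sum(dim=-1)        # per-token weighted sum
        return self.base(x) + fused
```

In this reading of the abstract, only the gate's parameters would be trained (reportedly feasible with as few as 200 examples), while the base model and the individual LoRAs stay fixed; the gate simply reweights existing skills at each generation step.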

Authors (7)
  1. Hanqing Wang (32 papers)
  2. Bowen Ping (5 papers)
  3. Shuo Wang (382 papers)
  4. Xu Han (270 papers)
  5. Yun Chen (134 papers)
  6. Zhiyuan Liu (433 papers)
  7. Maosong Sun (337 papers)
Citations (9)