MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts (2404.15159v3)

Published 22 Apr 2024 in cs.CL and cs.AI

Abstract: Fine-tuning LLMs is a common practice to adapt pre-trained models for specific applications. While methods like LoRA have effectively addressed GPU memory constraints during fine-tuning, their performance often falls short, especially in multi-task scenarios. In contrast, Mixture-of-Experts (MoE) models, such as Mixtral 8x7B, demonstrate remarkable performance in multi-task learning scenarios while maintaining a reduced parameter count. However, the resource requirements of these MoEs remain challenging, particularly for consumer-grade GPUs with less than 24GB memory. To tackle these challenges, we propose MixLoRA, an approach to construct a resource-efficient sparse MoE model based on LoRA. MixLoRA inserts multiple LoRA-based experts within the feed-forward network block of a frozen pre-trained dense model and employs a commonly used top-k router. Unlike other LoRA-based MoE methods, MixLoRA enhances model performance by utilizing independent attention-layer LoRA adapters. Additionally, an auxiliary load balance loss is employed to address the imbalance problem of the router. Our evaluations show that MixLoRA improves accuracy by about 9% compared to state-of-the-art PEFT methods in multi-task learning scenarios. We also propose a new high-throughput framework to alleviate the computation and memory bottlenecks during the training and inference of MoE models. This framework reduces GPU memory consumption by 40% and token computation latency by 30% during both training and inference.
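
To make the mechanism in the abstract concrete, the sketch below shows a MixLoRA-style sparse FFN block in PyTorch. It is a minimal illustration, not the authors' code: it assumes a frozen SwiGLU-style FFN shared by all experts, one pair of trainable LoRA deltas per expert on the up and down projections, a top-2 softmax router, and a Switch-Transformer-style auxiliary load-balance loss. All names (LoRADelta, MixLoRABlock, aux_loss) and hyperparameters are illustrative.

```python
# Minimal sketch of a MixLoRA-style sparse FFN block (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRADelta(nn.Module):
    """Low-rank update B @ A added on top of a frozen linear projection."""

    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.A = nn.Linear(in_features, rank, bias=False)
        self.B = nn.Linear(rank, out_features, bias=False)
        nn.init.zeros_(self.B.weight)          # start as a zero update
        self.scaling = alpha / rank

    def forward(self, x):
        return self.B(self.A(x)) * self.scaling


class MixLoRABlock(nn.Module):
    """Frozen dense FFN + per-expert LoRA deltas, routed with top-k gating."""

    def __init__(self, hidden, ffn_hidden, num_experts=8, top_k=2, rank=8):
        super().__init__()
        self.num_experts, self.top_k = num_experts, top_k
        # Shared, frozen base FFN (stands in for the pre-trained dense weights).
        self.up = nn.Linear(hidden, ffn_hidden, bias=False)
        self.down = nn.Linear(ffn_hidden, hidden, bias=False)
        for p in (*self.up.parameters(), *self.down.parameters()):
            p.requires_grad_(False)
        # Trainable parts: the router and one pair of LoRA deltas per expert.
        self.router = nn.Linear(hidden, num_experts, bias=False)
        self.up_lora = nn.ModuleList(LoRADelta(hidden, ffn_hidden, rank) for _ in range(num_experts))
        self.down_lora = nn.ModuleList(LoRADelta(ffn_hidden, hidden, rank) for _ in range(num_experts))

    def forward(self, x):
        tokens = x.reshape(-1, x.size(-1))                    # (T, hidden)
        probs = F.softmax(self.router(tokens), dim=-1)        # (T, E)
        top_p, top_i = probs.topk(self.top_k, dim=-1)         # (T, k)
        top_p = top_p / top_p.sum(dim=-1, keepdim=True)       # renormalize gates

        out = torch.zeros_like(tokens)
        for e in range(self.num_experts):
            mask = (top_i == e)                                # (T, k)
            if not mask.any():
                continue
            rows = mask.any(dim=-1)
            h = tokens[rows]
            # Expert e = frozen FFN plus its own LoRA correction.
            mid = F.silu(self.up(h) + self.up_lora[e](h))
            y = self.down(mid) + self.down_lora[e](mid)
            gate = (top_p * mask).sum(dim=-1)[rows].unsqueeze(-1)
            out[rows] += gate * y

        # Switch-style auxiliary load-balance loss: per-expert fraction of routed
        # tokens times the mean router probability assigned to that expert.
        frac = F.one_hot(top_i, self.num_experts).float().mean(dim=(0, 1)) * self.top_k
        aux_loss = self.num_experts * (frac * probs.mean(dim=0)).sum()
        return out.view_as(x), aux_loss


# Usage: route a batch of hidden states and add the balance loss to the task loss.
block = MixLoRABlock(hidden=512, ffn_hidden=1376, num_experts=8, top_k=2)
y, aux = block(torch.randn(2, 16, 512))
```

Under these assumptions the only trainable parameters are the router and the LoRA matrices, while the pre-trained FFN weights stay frozen and shared across experts, which is consistent with the abstract's claim of fitting MoE-style fine-tuning into consumer-GPU memory budgets.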

Authors (11)
  1. Dengchun Li (3 papers)
  2. Yingzi Ma (4 papers)
  3. Naizheng Wang (2 papers)
  4. Zhiyuan Cheng (15 papers)
  5. Lei Duan (12 papers)
  6. Jie Zuo (2 papers)
  7. Cal Yang (1 paper)
  8. Mingjie Tang (22 papers)
  9. Zhengmao Ye (2 papers)
  10. Yinghao Tang (6 papers)
  11. Yan Zhang (954 papers)
Citations (27)