MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts (2404.15159v3)
Abstract: Fine-tuning LLMs is a common practice to adapt pre-trained models for specific applications. While methods like LoRA have effectively addressed GPU memory constraints during fine-tuning, their performance often falls short, especially in multi-task scenarios. In contrast, Mixture-of-Experts (MoE) models, such as Mixtral 8x7B, demonstrate remarkable performance in multi-task learning scenarios while maintaining a reduced parameter count. However, the resource requirements of these MoEs remain challenging, particularly for consumer-grade GPUs with less than 24 GB of memory. To tackle these challenges, we propose MixLoRA, an approach for constructing a resource-efficient sparse MoE model based on LoRA. MixLoRA inserts multiple LoRA-based experts within the feed-forward network block of a frozen pre-trained dense model and employs a commonly used top-k router. Unlike other LoRA-based MoE methods, MixLoRA enhances model performance by utilizing independent attention-layer LoRA adapters. Additionally, an auxiliary load balance loss is employed to address the imbalance problem of the router. Our evaluations show that MixLoRA improves accuracy by about 9% compared to state-of-the-art PEFT methods in multi-task learning scenarios. We also propose a new high-throughput framework to alleviate the computation and memory bottlenecks during the training and inference of MoE models, reducing GPU memory consumption by 40% and token computation latency by 30%.
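To make the described architecture concrete, below is a minimal PyTorch sketch of a MixLoRA-style feed-forward block: LoRA-based experts layered on top of a frozen dense FFN, a top-k router, and a Switch-style auxiliary load-balance loss. It assumes a simplified non-gated FFN, and the class and parameter names (`MixLoRABlock`, `LoRAAdapter`, `rank`, `top_k`) are illustrative, not taken from the paper's released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRAAdapter(nn.Module):
    """Trainable low-rank update applied on top of a frozen linear layer."""
    def __init__(self, in_features, out_features, rank=8, alpha=16):
        super().__init__()
        self.lora_a = nn.Linear(in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # adapter starts as a no-op
        self.scaling = alpha / rank

    def forward(self, x):
        return self.lora_b(self.lora_a(x)) * self.scaling

class MixLoRABlock(nn.Module):
    """Sparse MoE FFN: every expert shares the frozen dense FFN weights
    and differs only in its own pair of LoRA adapters (illustrative sketch)."""
    def __init__(self, hidden, intermediate, num_experts=8, top_k=2):
        super().__init__()
        self.num_experts, self.top_k = num_experts, top_k
        # Frozen pre-trained FFN projections, shared by all experts.
        self.up = nn.Linear(hidden, intermediate)
        self.down = nn.Linear(intermediate, hidden)
        for p in (*self.up.parameters(), *self.down.parameters()):
            p.requires_grad_(False)
        # One trainable LoRA adapter pair per expert.
        self.up_lora = nn.ModuleList(LoRAAdapter(hidden, intermediate) for _ in range(num_experts))
        self.down_lora = nn.ModuleList(LoRAAdapter(intermediate, hidden) for _ in range(num_experts))
        self.router = nn.Linear(hidden, num_experts)

    def forward(self, x):  # x: (tokens, hidden)
        probs = F.softmax(self.router(x), dim=-1)
        weights, experts = probs.topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(self.num_experts):
                mask = experts[:, slot] == e
                if not mask.any():
                    continue
                h = F.silu(self.up(x[mask]) + self.up_lora[e](x[mask]))
                y = self.down(h) + self.down_lora[e](h)
                out[mask] += weights[mask, slot].unsqueeze(-1) * y
        # Auxiliary load-balance loss: fraction of tokens routed to each
        # expert times the mean routing probability for that expert.
        frac_tokens = F.one_hot(experts[:, 0], self.num_experts).float().mean(0)
        frac_probs = probs.mean(0)
        aux_loss = self.num_experts * (frac_tokens * frac_probs).sum()
        return out, aux_loss
```

In training, `aux_loss` would be scaled by a small coefficient and added to the language-modeling loss, so only the router and the LoRA adapters (plus the separate attention-layer adapters mentioned in the abstract) receive gradients while the dense backbone stays frozen.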
- Dengchun Li (3 papers)
- Yingzi Ma (4 papers)
- Naizheng Wang (2 papers)
- Zhiyuan Cheng (15 papers)
- Lei Duan (12 papers)
- Jie Zuo (2 papers)
- Cal Yang (1 paper)
- Mingjie Tang (22 papers)
- Zhengmao Ye (2 papers)
- Yinghao Tang (6 papers)
- Yan Zhang (954 papers)