mLoRA: Fine-Tuning LoRA Adapters via Highly-Efficient Pipeline Parallelism in Multiple GPUs (2312.02515v2)

Published 5 Dec 2023 in cs.LG and cs.AI

Abstract: Transformer-based, pre-trained LLMs have demonstrated outstanding performance across diverse domains, particularly in the emerging pretrain-then-finetune paradigm. Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method, is commonly used to adapt a base LLM to multiple downstream tasks. Further, LLM platforms enable developers to fine-tune multiple models and develop various domain-specific applications simultaneously. However, existing model parallelism schemes suffer from high communication overhead and inefficient GPU utilization when training multiple LoRA tasks across GPUs and machines. In this paper, we present mLoRA, a parallelism-efficient fine-tuning system designed for training multiple LoRA adapters across GPUs and machines. mLoRA introduces a novel LoRA-aware pipeline parallelism scheme that efficiently pipelines independent LoRA adapters and their distinct fine-tuning stages across GPUs and machines, along with a new LoRA-efficient operator to enhance GPU utilization during pipelined LoRA training. Our extensive evaluation shows that mLoRA can significantly reduce average fine-tuning task completion time, e.g., by 30%, compared to state-of-the-art methods like FSDP. More importantly, mLoRA enables simultaneous fine-tuning of larger models, e.g., two Llama-2-13B models on four NVIDIA RTX A6000 48GB GPUs, which is not feasible for FSDP due to its high memory requirements. Hence, mLoRA not only increases fine-tuning efficiency but also makes it more accessible on cost-effective GPUs. mLoRA has been deployed in AntGroup's production environment.
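
The building block the abstract refers to is standard LoRA: each downstream task trains a small low-rank update on top of frozen base weights, so many adapters can share one base model. Below is a minimal, illustrative PyTorch sketch of that adapter math only; the class name LoRALinear and the rank/scaling values are hypothetical, and this is not the authors' mLoRA code or its LoRA-aware pipeline-parallel scheduler.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, with A of shape (r, in) and B of shape (out, r)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: BA = 0 at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Example: two independent task adapters sharing one frozen base layer.
base = nn.Linear(4096, 4096)
task_adapters = [LoRALinear(base, r=8) for _ in range(2)]
x = torch.randn(1, 4096)
outputs = [adapter(x) for adapter in task_adapters]
print([o.shape for o in outputs])
```

Because the base weights are shared and frozen while only the small A/B matrices differ per task, several such adapters can in principle be trained concurrently against one base model, which is the setting mLoRA's pipeline-parallel scheme targets.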

Authors (11)
  1. Zhengmao Ye (2 papers)
  2. Dengchun Li (3 papers)
  3. Tingfeng Lan (7 papers)
  4. Jie Zuo (2 papers)
  5. Lei Duan (12 papers)
  6. Hui Lu (38 papers)
  7. Jian Sha (6 papers)
  8. Mingjie Tang (22 papers)
  9. Zetao Hu (1 paper)
  10. Sicong Zhang (6 papers)
  11. Yuanchun Zhou (62 papers)
Citations (6)