
Run LoRA Run: Faster and Lighter LoRA Implementations (2312.03415v2)

Published 6 Dec 2023 in cs.LG

Abstract: LoRA is a technique that reduces the number of trainable parameters in a neural network by introducing low-rank adapters to linear layers. It is used both for fine-tuning and for full training of LLMs. This paper presents RunLoRA, a framework of efficient LoRA implementations that significantly improves the speed of neural network training and fine-tuning with low-rank adapters. The proposed implementation optimizes the computation of LoRA operations based on the dimensions of the corresponding linear layer, the input dimensions, and the LoRA rank, choosing the best forward and backward computation graphs from FLOP and time estimates, which yields faster training without sacrificing accuracy. Experiments show up to a 28% speedup on language modeling networks.
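To make the graph-selection idea concrete, the sketch below shows a minimal, hypothetical version of it: two mathematically equivalent ways of applying a LoRA-augmented linear layer, with the cheaper one chosen by an estimated FLOP count. This is not the RunLoRA API; the function names, shapes, and the restriction to the forward pass are illustrative assumptions only.

```python
import torch

def lora_forward_flops(b, n_in, n_out, r):
    """Multiply-add counts for two equivalent forward graphs of a
    LoRA-augmented linear layer applied to a (b, n_in) input.
    (Illustrative estimate, not the paper's exact cost model.)"""
    # Variant 1: y = x @ W + (x @ A) @ B  -- keep the adapter factored
    v1 = b * n_in * n_out + b * n_in * r + b * r * n_out
    # Variant 2: y = x @ (W + A @ B)      -- merge the adapter into W first
    v2 = n_in * r * n_out + b * n_in * n_out
    return v1, v2

def lora_forward(x, W, A, B):
    """Pick the cheaper forward computation graph by estimated FLOPs."""
    b, n_in = x.shape
    n_out, r = W.shape[1], A.shape[1]
    v1, v2 = lora_forward_flops(b, n_in, n_out, r)
    if v1 <= v2:
        return x @ W + (x @ A) @ B
    return x @ (W + A @ B)

# Toy usage: a small rank with a large batch favours the factored variant.
x = torch.randn(512, 1024)
W = torch.randn(1024, 1024)   # frozen base weight
A = torch.randn(1024, 8)      # LoRA down-projection (rank 8)
B = torch.randn(8, 1024)      # LoRA up-projection
y = lora_forward(x, W, A, B)
print(y.shape)                # torch.Size([512, 1024])
```

The paper applies the same reasoning to the backward pass as well, where the choice of graph matters more because intermediate activations and gradient shapes differ between variants.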

Authors (3)
  1. Daria Cherniuk (6 papers)
  2. Aleksandr Mikhalev (6 papers)
  3. Ivan Oseledets (187 papers)
Citations (1)