ResLoRA: Identity Residual Mapping in Low-Rank Adaption (2402.18039v1)

Published 28 Feb 2024 in cs.CL and cs.AI

Abstract: As one of the most popular parameter-efficient fine-tuning (PEFT) methods, low-rank adaptation (LoRA) is commonly applied to fine-tune LLMs. However, updating the weights of LoRA blocks effectively and expeditiously is challenging due to the long calculation path in the original model. To address this, we propose ResLoRA, an improved framework of LoRA. By adding residual paths during training and using merging approaches to eliminate these extra paths during inference, our method can achieve better results in fewer training steps without any extra trainable parameters or inference cost compared to LoRA. The experiments on NLG, NLU, and text-to-image tasks demonstrate the effectiveness of our method. To the best of our knowledge, ResLoRA is the first work that combines the residual path with LoRA. The code of our method is available at https://github.com/microsoft/LMOps/tree/main/reslora .

Enhancing Low-Rank Adaptation with Residual Paths: Introducing ResLoRA

Introduction to Low-Rank Adaptation Methods

LLMs have taken center stage in NLP and beyond thanks to their strong performance across a wide range of tasks. As beneficial as they are, fully fine-tuning these models is often prohibitively expensive given their enormous parameter counts. Low-Rank Adaptation (LoRA) emerged as a promising solution: it adapts an LLM to a specific task by training only a small fraction of additional parameters while the pretrained weights stay frozen. Concretely, LoRA attaches a pair of low-rank matrices to the model's linear layers; after training, their product is merged back into the original weights, so inference incurs no additional computational cost.
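
To make the mechanism concrete, the sketch below wraps a frozen PyTorch linear layer with a pair of trainable low-rank matrices and folds them back into the weight after training. The class name `LoRALinear`, the rank, and the scaling convention are illustrative assumptions, not details taken from the paper or its codebase.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: frozen base weight plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weight stays frozen
        self.scaling = alpha / rank
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        # h = W0 x + (alpha / r) * B A x
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

    @torch.no_grad()
    def merge(self) -> nn.Linear:
        # Fold the low-rank product into the frozen weight for zero-cost inference.
        self.base.weight += (self.lora_B @ self.lora_A) * self.scaling
        return self.base
```

After `merge()` the wrapper can be discarded and the plain `nn.Linear` reused, which is why LoRA adds no inference-time overhead.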

ResLoRA: Bridging Residual Learning with LoRA

Despite LoRA's efficacy, its potential is held back by the long backpropagation path through the original model, which slows convergence and limits performance gains. To address this shortcoming, the paper introduces ResLoRA, a framework that integrates the residual-learning idea of ResNet into LoRA. Residual paths are added to the LoRA blocks during training and then merged away, so that inference runs on a plain structure identical to the original LoRA blocks. The approach retains LoRA's parameter efficiency while improving both model performance and training efficiency.
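
As a rough illustration of the training-time residual path, the following sketch lets a layer's low-rank branch also see the input of the preceding layer. This is only one schematic reading of the idea: the argument `prev_x`, the shape assumptions, and the exact form of the shortcut are assumptions made for illustration, not the paper's precise formulation.

```python
import torch
import torch.nn as nn

class ResLoRALinearSketch(nn.Module):
    """Schematic LoRA layer with an extra residual path used only during training.

    The low-rank branch receives the current input plus the previous layer's
    input (assumed here to have the same shape), shortening the gradient path
    back to earlier blocks.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.scaling = alpha / rank
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x, prev_x=None):
        # Residual path: route (x + prev_x) through the low-rank update.
        lora_in = x if prev_x is None else x + prev_x
        update = (lora_in @ self.lora_A.T @ self.lora_B.T) * self.scaling
        return self.base(x) + update
```

Because the update now depends on `prev_x`, the branch can no longer be folded exactly into the frozen weight; removing this dependency before inference is precisely what the merging approaches described next are for.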

Methodological Insights and Achievements

ResLoRA makes several key contributions. First, it proposes three residual structures for LoRA blocks, namely input-shortcut, block-shortcut, and middle-shortcut, each shaping the gradient flow in a different way so that gradients reach the LoRA blocks along a shorter path than in the original framework. Second, it devises merging approaches that reabsorb the added residual paths into the standard LoRA configuration, so no extra inference cost is incurred. Experiments show improvements of 1% to 20% across natural language generation (NLG), natural language understanding (NLU), and text-to-image tasks, demonstrating the framework's versatility and robustness.
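
The following sketch shows one way such a merge could be approximated for an input-shortcut-style block: a tracked ratio between the norms of the previous and current inputs rescales the low-rank product before it is folded into the frozen weight. The function name, the norm-tracking scheme, and the scaling rule are assumptions for illustration; the paper's actual merge approaches define the exact procedure.

```python
import torch

@torch.no_grad()
def approximate_merge(base_weight: torch.Tensor,
                      lora_A: torch.Tensor,
                      lora_B: torch.Tensor,
                      scaling: float,
                      prev_input_norm: float,
                      curr_input_norm: float) -> torch.Tensor:
    """Fold a residual-augmented LoRA branch into the frozen weight.

    Approximates B A (x + prev_x) ~= (1 + k) B A x, where k is the ratio
    ||prev_x|| / ||x|| estimated from activation norms tracked during training.
    """
    k = prev_input_norm / max(curr_input_norm, 1e-8)
    base_weight += (1.0 + k) * scaling * (lora_B @ lora_A)
    return base_weight
```

Any error in the estimated ratio surfaces as the small post-merge accuracy drop noted in the concluding section below.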

Theoretical Foundations and Practical Implications

The mathematical analysis provided for ResLoRA explains the observed performance gains and lays groundwork for future work on combining deep-learning architectural ideas with parameter-efficient tuning methods. Practically, the approach offers a viable path to adapting LLMs in resource-constrained settings without compromising performance or adaptability.

Future Directions and Conclusion

Despite its contributions, ResLoRA has limitations, most notably extra computational overhead during training and a slight accuracy loss introduced by the merge operations. These limitations open avenues for further research into better merging mechanisms and into combining ResLoRA with other LoRA variants and models. In essence, ResLoRA charts a new direction among PEFT methods, offering both a stronger alternative to LoRA and a foundation for future advances in fine-tuning large-scale models efficiently and effectively.

Authors (10)
  1. Shuhua Shi
  2. Shaohan Huang
  3. Minghui Song
  4. Zhoujun Li
  5. Zihan Zhang
  6. Haizhen Huang
  7. Furu Wei
  8. Weiwei Deng
  9. Feng Sun
  10. Qi Zhang