Retrieval-Augmented Mixture of LoRA Experts for Uploadable Machine Learning (2406.16989v2)

Published 24 Jun 2024 in cs.LG and cs.AI

Abstract: Low-Rank Adaptation (LoRA) offers an efficient way to fine-tune LLMs. Its modular and plug-and-play nature allows the integration of various domain-specific LoRAs, enhancing LLM capabilities. Open-source platforms like Huggingface and Modelscope have introduced a new computational paradigm, Uploadable Machine Learning (UML). In UML, contributors use decentralized data to train specialized adapters, which are then uploaded to a central platform to improve LLMs. This platform uses these domain-specific adapters to handle mixed-task requests requiring personalized service. Previous research on LoRA composition either focuses on specific tasks or fixes the LoRA selection during training. However, in UML, the pool of LoRAs is dynamically updated with new uploads, requiring a generalizable selection mechanism for unseen LoRAs. Additionally, the mixed-task nature of downstream requests necessitates personalized services. To address these challenges, we propose Retrieval-Augmented Mixture of LoRA Experts (RAMoLE), a framework that adaptively retrieves and composes multiple LoRAs based on input prompts. RAMoLE has three main components: LoraRetriever for identifying and retrieving relevant LoRAs, an on-the-fly MoLE mechanism for coordinating the retrieved LoRAs, and efficient batch inference for handling heterogeneous requests. Experimental results show that RAMoLE consistently outperforms baselines, highlighting its effectiveness and scalability.
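To make the retrieve-then-compose idea concrete, below is a minimal sketch of the workflow the abstract describes: embed an incoming prompt, retrieve the most relevant adapters from a dynamically growing LoRA pool, and merge their low-rank updates with similarity-derived gating weights. This is not the authors' implementation; the names (`LoraAdapter`, `retrieve_loras`, `compose_delta`), the placeholder embedder, and the softmax gating are illustrative assumptions standing in for RAMoLE's trained LoraRetriever and on-the-fly MoLE router.

```python
# Hedged sketch of retrieval-augmented LoRA composition (not the RAMoLE code).
import numpy as np

rng = np.random.default_rng(0)
D_MODEL, RANK, EMB_DIM = 64, 4, 32


class LoraAdapter:
    """One uploaded domain-specific adapter: low-rank factors plus a domain embedding."""

    def __init__(self, name: str):
        self.name = name
        self.A = rng.normal(scale=0.02, size=(RANK, D_MODEL))   # down-projection
        self.B = rng.normal(scale=0.02, size=(D_MODEL, RANK))   # up-projection
        # In RAMoLE this embedding would come from the trained LoraRetriever;
        # here it is a random stand-in.
        self.embedding = rng.normal(size=EMB_DIM)
        self.embedding /= np.linalg.norm(self.embedding)


def embed_prompt(prompt: str) -> np.ndarray:
    """Hypothetical prompt embedder; a real system would use a trained encoder.
    This placeholder ignores the text and just returns a unit vector."""
    vec = rng.normal(size=EMB_DIM)
    return vec / np.linalg.norm(vec)


def retrieve_loras(prompt_emb: np.ndarray, pool: list[LoraAdapter], k: int = 3):
    """Return the k adapters whose embeddings are most similar to the prompt."""
    scores = np.array([adapter.embedding @ prompt_emb for adapter in pool])
    top = np.argsort(scores)[::-1][:k]
    return [pool[i] for i in top], scores[top]


def compose_delta(adapters, scores) -> np.ndarray:
    """Softmax-gate the retrieved adapters and sum their low-rank weight updates."""
    gates = np.exp(scores - scores.max())
    gates /= gates.sum()
    delta = np.zeros((D_MODEL, D_MODEL))
    for gate, adapter in zip(gates, adapters):
        delta += gate * (adapter.B @ adapter.A)  # LoRA update: B @ A
    return delta


# Dynamic pool: new uploads can be appended at any time without retraining a router.
pool = [LoraAdapter(f"domain_{i}") for i in range(10)]
prompt_emb = embed_prompt("Translate this legal contract clause into plain English.")
retrieved, scores = retrieve_loras(prompt_emb, pool, k=3)
delta_W = compose_delta(retrieved, scores)
print("retrieved:", [a.name for a in retrieved], "| delta norm:", np.linalg.norm(delta_W))
```

Because selection happens at inference time by similarity search rather than through a fixed, trained routing table, newly uploaded (unseen) adapters can participate immediately, which is the generalizable-selection property the abstract emphasizes for UML.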

Authors (8)
  1. Ziyu Zhao (28 papers)
  2. Leilei Gan (21 papers)
  3. Guoyin Wang (108 papers)
  4. Yuwei Hu (15 papers)
  5. Tao Shen (87 papers)
  6. Hongxia Yang (130 papers)
  7. Kun Kuang (114 papers)
  8. Fei Wu (317 papers)
Citations (4)