ReMix: Reinforcement routing for mixtures of LoRAs in LLM finetuning

Published 10 Mar 2026 in cs.LG and cs.CL | (2603.10160v1)

Abstract: Low-rank adapters (LoRAs) are a parameter-efficient finetuning technique that injects trainable low-rank matrices into pretrained models to adapt them to new tasks. Mixture-of-LoRAs models expand neural networks efficiently by routing each layer input to a small subset of that layer's specialized LoRAs. Existing Mixture-of-LoRAs routers assign a learned routing weight to each LoRA to enable end-to-end training of the router. Despite their empirical promise, we observe that the routing weights are typically extremely imbalanced across LoRAs in practice, with only one or two LoRAs often dominating. This limits the number of effective LoRAs and thus severely hinders the expressive power of existing Mixture-of-LoRAs models. In this work, we attribute this weakness to the nature of learnable routing weights and rethink the fundamental design of the router. To address this critical issue, we propose a new router design that we call Reinforcement Routing for Mixture-of-LoRAs (ReMix). Our key idea is to use non-learnable routing weights so that all active LoRAs are equally effective, with no LoRA dominating the routing weights. However, such a router cannot be trained directly via gradient descent, because its routing weights are non-learnable. We therefore propose an unbiased gradient estimator for the router based on the REINFORCE leave-one-out (RLOO) technique, regarding the supervision loss as the reward and the router as the policy in reinforcement learning. Our gradient estimator also makes it possible to scale up training compute to boost the predictive performance of ReMix. Extensive experiments demonstrate that ReMix significantly outperforms state-of-the-art parameter-efficient finetuning methods under a comparable number of activated parameters.

Summary

  • The paper introduces a reinforcement learning-based router that prevents weight collapse by enforcing balanced, fixed routing weights across activated LoRAs.
  • It demonstrates significant accuracy improvements (2.8 to 3.3 points) and enhanced parameter efficiency on multi-task LLM finetuning benchmarks.
  • The approach leverages a non-differentiable top-k selection transformed into an RL problem using the REINFORCE leave-one-out trick for stable gradient propagation.

ReMix: Reinforcement Routing for Mixtures of LoRAs in LLM Finetuning

Introduction and Problem Formulation

Mixture-of-LoRAs architectures generalize classical LoRA-based parameter-efficient finetuning (PEFT) of LLMs by enabling dynamic routing: each input can be processed via a combination of multiple LoRA modules (“experts”) per layer. Prior methods optimize the routing weights (often softmax-based), allowing routers to allocate activation mass among available LoRAs in an input-adaptive manner. However, the authors theoretically and empirically demonstrate that these learnable routers suffer from severe weight collapse: for almost every input, the effective support size of the routing distribution is close to one even when k > 1 LoRAs are allowed to be simultaneously activated.
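
As a concrete illustration, a conventional learnable router can be sketched as a linear scoring map followed by a softmax over the available LoRAs; the dimensions and names below are illustrative, not taken from the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d, n = 16, 8                      # input dim, number of LoRA experts (illustrative)
W = rng.normal(size=(d, n))       # learnable router parameters, Gaussian init
x = rng.normal(size=d)            # one layer input
w = softmax(x @ W)                # input-adaptive routing weights over the n LoRAs

# The layer output would mix the n LoRA outputs by these weights; in practice
# the distribution w tends to concentrate on only one or two entries.
print(w.round(3))
```

Each input thus gets its own distribution `w`; the collapse phenomenon is precisely that this distribution is almost always near-degenerate.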

This phenomenon fundamentally limits expressivity and parameter utilization: multiple LoRAs are allocated and maintained but rarely used in practice, which all but negates the intended benefit of the mixture mechanism. The problem also intensifies over the course of finetuning, with routing distributions becoming more peaked as training proceeds. These observations are made precise through analysis of effective support size (ESS), entropy, and robust statistics across training runs.

The ReMix Approach: Reinforcement-Guided Mixture Routing

To address this collapse, ReMix introduces a new router design based on enforced balanced, non-learnable routing weights across all activated LoRAs. Instead of allocating learned scalar routing weights via a softmax, every selected LoRA receives a fixed weight ω > 0. For each input, a categorical distribution (parameterized by a small neural network) produces scores, and the top-k LoRAs (by probability) are activated, each with weight ω, while the others receive zero.
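
A minimal sketch of this fixed-weight top-k routing, with the score network abstracted to a score vector (`omega` and all names are illustrative assumptions, not the paper's code):

```python
import numpy as np

def remix_route(scores, k, omega=1.0):
    """Activate the top-k LoRAs with the same fixed weight omega.

    Sketch of the non-learnable weighting described in the paper:
    no LoRA can dominate, because every active LoRA gets weight omega
    and every inactive LoRA gets a hard zero.
    """
    top = np.argsort(scores)[-k:]            # indices of the k largest scores
    weights = np.zeros(len(scores))
    weights[top] = omega                     # equal weight for every active LoRA
    return weights

scores = np.array([0.1, 2.0, -0.5, 1.2, 0.3])
print(remix_route(scores, k=2))              # -> [0. 1. 0. 1. 0.]
```

The hard zeros and the discrete top-k selection are exactly what make this router non-differentiable, motivating the RL-based training described next.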

This design breaks the connection between learnable mixture weights and over-concentration by construction, ensuring that for each input exactly k LoRAs are activated with equal importance. However, the design is non-differentiable (the top-k operator is discrete and assigns hard zeros), precluding standard backpropagation to train the router parameters.

Figure 1: Overview of the ReMix finetuning procedure, showing RL-based training for non-differentiable mixture routing.

To enable end-to-end training, ReMix reframes router optimization as a reinforcement learning (RL) problem. The router defines a stochastic policy over LoRA subsets; the reward is the negative of the task loss. A specialized unbiased gradient estimator using the REINFORCE leave-one-out (RLOO) trick, in which each sampled route's baseline is the average reward of the other sampled routes, is introduced to reduce variance and efficiently propagate gradients through the non-differentiable router. This allows scalable, compute-efficient training with minibatched sampling of activated LoRA sets.
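
The leave-one-out baseline can be sketched as follows; this is a generic RLOO advantage computation under the stated reward convention (reward = negative task loss), not the paper's implementation:

```python
import numpy as np

def rloo_advantages(rewards):
    """REINFORCE leave-one-out advantages.

    For each of m sampled routes, the baseline is the mean reward of the
    OTHER m-1 samples, which keeps the policy-gradient estimate unbiased
    while reducing its variance. Sketch only.
    """
    r = np.asarray(rewards, dtype=float)
    m = len(r)
    loo_baseline = (r.sum() - r) / (m - 1)   # mean of the other samples
    return r - loo_baseline

# Rewards = negative task losses for m sampled LoRA subsets of one input.
adv = rloo_advantages([-0.9, -1.1, -0.7, -1.3])
print(adv)   # advantages sum to zero; the lowest-loss route gets the largest
```

Each advantage would then scale the log-probability gradient of its sampled subset, so routes that beat their peers are reinforced.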

During inference, theoretical results show that, under mild assumptions, top-k selection on the router's categorical distribution recovers the optimal subset; this deterministic inference procedure maximizes the probability of utilizing the best available ensemble.

Theoretical and Empirical Evidence for Weight Collapse

A core result is a rigorous upper bound on the effective support size (ESS) of routers with softmax-parameterized weights under Gaussian initialization. With overwhelming probability, the ESS of such routers is at most two, even when n ≫ 2 LoRAs are available and k > 1 are supposed to be concurrently activated. The softmax's competition effect and the rarity of near-tied logits are the underlying causes of this bottleneck.
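
To make the ESS bound concrete, one common way to measure effective support size is the inverse participation ratio 1/∑wᵢ² (the paper's exact definition may differ; this is an illustrative proxy):

```python
import numpy as np

def ess(w):
    """Effective support size via the inverse participation ratio.

    Equals n for a uniform distribution over n entries and approaches 1
    as the distribution collapses onto a single entry. Illustrative
    proxy; not necessarily the paper's exact ESS definition.
    """
    w = np.asarray(w, dtype=float)
    return 1.0 / np.sum(w ** 2)

print(ess(np.full(8, 1 / 8)))                        # uniform over 8 -> 8.0
print(ess([0.93, 0.05, 0.01, 0.01, 0, 0, 0, 0]))     # collapsed -> ~1.15
```

Under the paper's bound, a softmax router over n ≫ 2 LoRAs would typically score at most about 2 on such a measure, matching the collapsed case above rather than the uniform one.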

Empirically, visualizations of routing weights (histograms and ESS trajectories over training steps) confirm that, as finetuning progresses, the routing almost always collapses to a single dominant LoRA, with the remaining weights close to zero—even when k > 1. This produces a substantial mismatch between the Mixture-of-LoRAs model's nominal capacity and its realized capacity in practice.

Experimental Evaluation

Extensive experiments on multi-task LLM adaptation benchmarks (GSM8K for math reasoning, HumanEval for code, ARC-c for knowledge recall) systematically compare ReMix to a wide array of PEFT and mixture baselines. These include prompt/prefix tuning, canonical LoRA, DoRA, rsLoRA, and more recent mixture architectures such as MixLoRA, HydraLoRA, and VB-LoRA. Across all tasks:

  • Accuracy: ReMix consistently provides the highest accuracy, with substantial gains (2.8 to 3.3 points) over leading mixture and single LoRA baselines. Improvements are particularly pronounced on GSM8K and code completion.
  • Parameter efficiency: These gains are achieved with parameter footprints on par with, or below, those of strong baselines.
  • Compute scalability: Due to the RL-based gradient estimator, ReMix can further improve with increased compute (larger minibatch), unlike deterministic routers.
  • Ablation: Removing key components (RLOO for router training, top-k selection for inference) degrades performance, confirming their contribution.
  • Subspace diversity: The activated LoRA subsets are demonstrably more diverse than in rank-kr LoRA, yielding better performance per trainable parameter.
  • Scalability with k: Increasing the number of activated LoRAs enhances accuracy, supporting the hypothesis that balanced mixtures are critical for model capacity.

Implications and Future Directions

ReMix’s reinforcement-guided router constitutes a shift in PEFT mixture design away from differentiable but imbalanced soft mixtures toward sparsity-enforcing, variance-robust selection policies. This has significant downstream implications:

  • Expressivity: The mixture design actually realizes the theoretical expressivity expected of multi-adapter networks.
  • Efficiency: Balanced parameter utilization, especially under strict compute/memory budgets, is vital for practical deployment of LLMs across domains and tasks.
  • Modular composability: The framework can in principle be composed with any subnetwork or adapter family; it is not specific to canonical LoRA modules.
  • Computational scaling: The RL-based surrogate can naturally leverage larger compute resources for more stable/efficient training, in contrast to traditional mixture models where training cost is fixed for a given architecture.

These results open up further avenues, including extending ReMix-type RL-based routers to deeper mixture-of-experts LLMs, modular (cross-task) adaptation, hierarchical mixtures, and potential integration with dynamic modular architectures. The RL optimization paradigm also aligns with broader trends in LLM alignment and downstream task policy tuning.

Conclusion

ReMix provides strong theoretical clarity and practical evidence that mixture-of-LoRAs routers based on reinforcement-guided, non-learnable routing weights unlock the intended expressivity and efficiency of mixture-based PEFT. By ensuring all activated LoRAs contribute equally and leveraging scalable RL-based optimization, ReMix establishes new state-of-the-art accuracy and parameter efficiency for multi-task LLM finetuning (2603.10160).
