DMoERM: Mixture-of-Experts RL Teacher
- The paper introduces DMoERM, a double-layer MoE reward model serving as an RL teacher, which uses two-stage expert routing to mitigate multi-task interference and noisy annotations.
- DMoERM employs an outer sparse task router and an inner dense LoRA-based MoE structure to decompose tasks into fine-grained capability experts, stabilizing reward signals.
- Empirical results show DMoERM improves agreement with human rankings and policy optimization stability, outperforming traditional reward models in RLHF.
A Mixture-of-Experts (MoE) RL Teacher is a reward modeling framework with a hierarchical MoE architecture designed to address fundamental challenges in preference-based reinforcement learning from human feedback (RLHF) for LLMs. The distinctive contribution of DMoERM ("Double-Layer MoE Reward Model") is its two-stage expert routing: an outer sparse router partitions input by task (e.g., text creation, roleplay), while an inner dense MoE structure decomposes each task into capability sub-dimensions (e.g., intent conformity, expressiveness), each handled by a fine-tuned LoRA expert. This approach targets two pervasive issues in reward model (RM) training for LLM alignment fine-tuning: multi-task disturbance from heterogeneous data, and low inter-annotator agreement introducing label noise. By isolating task and capability contributions, DMoERM enhances reward signal fidelity, stabilizes policy optimization, and achieves superior alignment with human preferences (Quan, 2024).
1. Challenges in Reward Model Training for RLHF
Reward models are central to the alignment of LLMs via RLHF, guiding policy updates based on predicted human preference. Empirically, two obstacles degrade RM effectiveness:
- Multi-task Interference: Aggregating data from disparate dialogue domains and preference axes in a single RM induces negative transfer. The model's generalization performance declines when simultaneously exposed to tasks with orthogonal objectives (e.g., roleplay vs. objective QA), as the shared representation is insufficiently specialized.
- Noisy Preference Supervision: Human annotators exhibit only limited pairwise agreement on overall preference data (around 60%, as discussed below), so overall reward signals are substantially noisy. This impairs the learnability and predictive validity of RMs as alignment teachers.
The DMoERM architecture is constructed to directly address both issues by structurally partitioning tasks and capability factors.
2. Double-Layer Mixture-of-Experts Architecture
The DMoERM model deploys a two-level MoE hierarchy:
Outer Sparse MoE (Task Router)
- For distinct tasks (e.g., text creation, roleplay, chitchat), the input is processed by a frozen router (a small transformer or MLP) that generates task logits $z_1, \dots, z_T$.
- The gating network applies a softmax over these logits: $p_t = \exp(z_t) / \sum_{t'=1}^{T} \exp(z_{t'})$.
- The top-scoring task index $\hat{t} = \arg\max_t p_t$ selects a single task-specific RM, avoiding multi-task disturbance while keeping inference cost fixed.
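A minimal sketch of this top-1 routing step is shown below (PyTorch), assuming a pooled input embedding and a small MLP gate; the class name `TaskRouter` and its dimensions are illustrative, not taken from the released code.

```python
import torch
import torch.nn as nn

class TaskRouter(nn.Module):
    """Frozen outer router: maps a pooled input embedding to task logits
    and selects exactly one task-specific reward model (top-1 routing)."""

    def __init__(self, hidden_dim: int, num_tasks: int):
        super().__init__()
        # A small MLP stands in for the "small transformer or MLP" gate.
        self.gate = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim // 2),
            nn.ReLU(),
            nn.Linear(hidden_dim // 2, num_tasks),
        )
        # The router is pretrained and kept frozen during RM training.
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, pooled_input: torch.Tensor) -> torch.Tensor:
        logits = self.gate(pooled_input)        # task logits z_1, ..., z_T
        probs = torch.softmax(logits, dim=-1)   # softmax gating distribution
        return probs.argmax(dim=-1)             # index of the selected task RM
```

Because only the single selected task RM is evaluated per input, the routing layer adds negligible overhead and inference cost stays constant regardless of the number of tasks.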
Inner Dense MoE (Capability Experts via LoRA)
- Each task is decomposed into capability dimensions (e.g., "intent conformity," "expressiveness").
- The base RM for task $t$, with parameters $\theta_t$, is extended by $k$ LoRA adapters $\Delta\theta_{t,1}, \dots, \Delta\theta_{t,k}$ (one per capability dimension), each expert handling a distinct capability.
- Processing an input with the $i$-th expert yields an embedding $e_i$ and a scalar capability score $s_i$.
MLP Aggregator
- The expert embeddings are concatenated and fed to a two-layer MLP with PReLU activation: $r = W_2\,\mathrm{PReLU}(W_1 [e_1 \Vert \cdots \Vert e_k] + b_1) + b_2$.
- The MLP models non-linear interactions among capability dimensions, outputting a holistic scalar reward.
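The inner dense MoE and the aggregator head can be sketched together as follows, assuming each capability expert is a LoRA-adapted copy of the task RM that returns an embedding and a scalar score; `InnerDenseMoERM` and its expert interface are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class InnerDenseMoERM(nn.Module):
    """Dense inner MoE for a single task: every capability expert is
    evaluated, and a two-layer PReLU MLP aggregates the expert embeddings
    into one holistic scalar reward."""

    def __init__(self, capability_experts: nn.ModuleList, embed_dim: int):
        super().__init__()
        # Assumed interface: each expert maps a prompt-response encoding to
        # (embedding e_i, scalar capability score s_i).
        self.experts = capability_experts
        k = len(capability_experts)
        self.aggregator = nn.Sequential(
            nn.Linear(k * embed_dim, embed_dim),
            nn.PReLU(),
            nn.Linear(embed_dim, 1),
        )

    def forward(self, prompt_response: torch.Tensor):
        embeddings, capability_scores = [], []
        for expert in self.experts:             # dense: all experts are used
            e_i, s_i = expert(prompt_response)
            embeddings.append(e_i)
            capability_scores.append(s_i)
        concat = torch.cat(embeddings, dim=-1)  # [e_1 || ... || e_k]
        reward = self.aggregator(concat).squeeze(-1)
        # Capability scores are returned alongside the reward so that the
        # contribution of each capability can be inspected (Section 6).
        return reward, capability_scores
```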
3. Training Paradigm
DMoERM training proceeds in three sequential stages per task $t$:
- Task-Specific Base Model Fine-Tuning: A portion of the preference pairs for task $t$ is used to full-parameter fine-tune the base model $\theta_t$, yielding a monolithic task-specific RM.
- Capability Expert LoRA Fine-Tuning:
- Pairs are annotated with single-capability preferences by querying a public LLM API (Baidu ERNIE Bot).
- To lessen annotation bias, each pair is scored in both swap orders, and only pairs with consistent judgments are retained (see the filtering sketch after this list).
- LoRA adapters are trained on these cleaned, capability-labeled examples, one per capability.
- Aggregator Head Training:
- Freeze the base model and all LoRA adapters.
- Use the remaining task-specific preference data (held out from the base-model stage) to train only the aggregator MLP.
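A minimal sketch of the swap-order consistency filter used during capability labeling, assuming a caller-supplied `judge` callable that wraps the LLM API and returns the index of the listed response it prefers; the function name and signature are hypothetical.

```python
from typing import Callable, List, Tuple

def filter_consistent_pairs(
    pairs: List[Tuple[str, str, str]],          # (prompt, response_a, response_b)
    judge: Callable[[str, str, str], int],      # 0 if the first listed response wins, 1 otherwise
) -> List[Tuple[str, str, str, int]]:
    """Score each pair in both presentation orders and keep only pairs whose
    preferred response is the same regardless of order, filtering out labels
    driven by positional bias."""
    kept = []
    for prompt, resp_a, resp_b in pairs:
        pref_forward = judge(prompt, resp_a, resp_b)   # response_a listed first
        pref_swapped = judge(prompt, resp_b, resp_a)   # response_b listed first
        # Consistent iff both orders point to the same underlying response.
        if pref_forward == 1 - pref_swapped:
            kept.append((prompt, resp_a, resp_b, pref_forward))
    return kept
```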
Across all stages, the pairwise reward difference is trained via the logistic loss $\mathcal{L} = -\,\mathbb{E}_{(x, y_w, y_l)}\big[\log \sigma\big(r(x, y_w) - r(x, y_l)\big)\big]$, where $y_w$ and $y_l$ denote the preferred and dispreferred responses and $\sigma$ is the sigmoid function.
No explicit sparsity penalty is imposed; the router remains pretrained and frozen throughout.
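A minimal training-step sketch for this pairwise logistic loss, assuming a reward model callable that maps batches of (prompt, response) pairs to scalar scores; all names and signatures here are illustrative.

```python
import torch
import torch.nn.functional as F

def pairwise_logistic_loss(reward_chosen: torch.Tensor,
                           reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise logistic (Bradley-Terry style) loss on the reward difference:
    -log sigmoid(r_chosen - r_rejected), averaged over the batch."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

def training_step(reward_model, optimizer, prompts, chosen, rejected) -> float:
    """One optimization step on a batch of preference pairs."""
    r_chosen = reward_model(prompts, chosen)      # scalar reward per pair
    r_rejected = reward_model(prompts, rejected)
    loss = pairwise_logistic_loss(r_chosen, r_rejected)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same loss is reused in each stage; only which parameters receive gradients changes (the full base model, the LoRA adapters, or the aggregator MLP alone).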
4. Handling Label Noise and Multi-Task Interference
Human annotation of overall dialog quality yields only around 60% consistency, but decomposing responses into fine-grained capability scores (five for text creation: "intent conformity," "expressiveness," "readability," "content richness," "logic") raises consistency above 80% on each individual capability and improves consistency of the aggregated judgment as well. This suggests that factorized evaluation surfaces more robustly learnable signals and enables more consistent reward modeling.
Because the outer router is frozen and routes each input to a single task-specific RM, every per-task expert group is isolated from off-task samples, circumventing the negative transfer observed in ablations where multi-task mixtures degraded accuracy relative to single-task training.
Automated capability labeling combined with positional-bias swap-filtering imparts data efficiency while cleansing noisy judgments, a fundamental advance over manual-only pipelines.
5. Empirical Results
Experimental evaluation of DMoERM demonstrates:
- Preference Consistency: On manually labeled sets, DMoERM achieved 70.7% agreement with human rankings, outperforming single reward models (58.2%), mean ensembles (62.4%), and advanced ensemble methods like UWO (62.6%). It surpassed zero-shot GPT-4 (59.5%) and one-shot GPT-4 (62.3%). The inner MoE alone (outer router ablated) retained 67.0% consistency.
- Best-of-n (BoN) Sampling: DMoERM-optimized policies yielded higher gold RM scores as the KL divergence (measured in nats) from the initial policy grew; baseline RMs over-optimized and their gold scores degraded at larger KL budgets, whereas DMoERM remained stable.
- PPO Fine-Tuning: During PPO with KL penalty (steps = $3,000$), DMoERM-tuned policies outperformed all ensemble baselines on average gold RM scores and improved out-of-distribution generalization on AlignBench prompts.
- Human Evaluation: Human judges preferred DMoERM outputs over baseline outputs at select checkpoints in both BoN and PPO settings.
| Model | Human Agreement (%) |
|---|---|
| Single RM | 58.2 |
| Mean ensemble | 62.4 |
| UWO/WCO ensemble | 62.6 |
| Zero-shot GPT-4 | 59.5 |
| One-shot GPT-4 | 62.3 |
| DMoERM (full) | 70.7 |
| DMoERM (no router) | 67.0 |
6. Interpretability and Reinforcement Learning Supervision
DMoERM confers practical interpretability advantages: capability expert scores expose the contributing factors for each reward outcome, enabling inspection of why particular responses are preferred. As an RL teacher, DMoERM's structured reward signals address the overoptimization trap in best-of-n and RL fine-tuning settings and yield more stable policy improvements. The combination of per-task isolation and capability decomposition makes it a systematically better teacher in downstream RLHF, as reflected in human preference win rates, reward stability, and alignment generalization (Quan, 2024).
7. Conclusions and Implications
The double-layer mixture-of-experts framework, instantiated as DMoERM, leverages outer sparse gating and inner dense LoRA expert specialization to counteract multi-task interference and noisy-label degradation in preference-based LLM alignment. This design achieves superior agreement with human preferences, mitigates the failure modes common to over-optimized reward models, and provides fine-grained interpretability of learned preferences. The use of API-based, swap-filtered annotation pipelines further improves data efficiency and label quality. A plausible implication is that structurally factorized reward models, with explicit task and capability disentanglement, can serve as more robust and effective teachers for both research-grade and large-scale RLHF efforts.
For implementation details, datasets, and code, see the public repository at https://github.com/quanshr/DMoERM-v1 (Quan, 2024).