MoSEs: Modular Learning with SubExperts

Updated 18 November 2025
  • MoSEs are modular learning systems that deploy specialized subexperts via dynamic routing to manage diverse, high-dimensional tasks.
  • They leverage sparse gating with ℓ1 regularization and EM-based training to enforce subspace specialization and efficient feature selection.
  • MoSEs have proven effective across domains like language modeling, graph analysis, and continual learning, offering improved interpretability and scalability.

Mixture of SubExperts (MoSEs) refers to a broad class of modular learning architectures in which a set of specialized expert modules—termed “subexperts”—are dynamically routed and aggregated to solve complex tasks. MoSEs generalize the Mixture of Experts (MoE) paradigm by emphasizing structured sparsity, expert specialization, subspace selectivity, and adaptive routing, enabling efficient, scalable, and interpretable solutions for high-dimensional, heterogeneous, or sequential learning scenarios. The concept has been instantiated across diverse domains—including classical classification, deep LLMs, continual learning, graph representation, and combinatorial optimization—unifying models where only a task-adaptive, input-adaptive, or data-type-adaptive subset of modules is activated per example or context.

1. Core Principles and Model Formulation

The canonical formulation of Mixture of SubExperts is rooted in the regularized mixture of experts architecture for complex classification tasks (Peralta, 2014). Given input–output pairs $\{(x_n, y_n)\}_{n=1}^N$ with $x_n \in \mathbb{R}^D$, $y_n \in \{c_1, \ldots, c_Q\}$, the conditional posterior is modeled as

$$p(y \mid x) = \sum_{k=1}^K p(m_k \mid x)\, p(y \mid m_k, x)$$

where $p(m_k \mid x)$ is a gating function assigning relevance or “responsibility” to each subexpert $k$, and $p(y \mid m_k, x)$ is the output distribution from expert $k$. The log-linear parametrization gives

$$p(m_k \mid x) = \frac{\exp(\nu_k^T x)}{\sum_{j=1}^K \exp(\nu_j^T x)}$$

$$p(y = c_\ell \mid m_k, x) = \frac{\exp(\omega_{k,\ell}^T x)}{\sum_{r=1}^Q \exp(\omega_{k,r}^T x)}$$

Subexpert-ness arises when sparsity constraints, specifically $\ell_1$ penalties, are imposed on both the gate parameters $\nu_k$ and the expert parameters $\omega_{k,\ell}$:

$$\langle\mathcal{L}_c^R\rangle = \langle\mathcal{L}_c\rangle - \lambda_\nu \sum_k \|\nu_k\|_1 - \lambda_\omega \sum_{k,\ell} \|\omega_{k,\ell}\|_1$$

This enforces that each expert and gate operates in a low-dimensional subspace, specializing in different input regions (Peralta, 2014). Learning is performed by EM: an E-step computing responsibilities $R_{kn} = p(m_k \mid x_n, y_n)$, and an M-step solving $\ell_1$-regularized convex subproblems for each gate and expert by weighted least squares.
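The formulation above maps directly onto a small amount of code. The PyTorch sketch below is illustrative only: the names (`SparseMoEClassifier`, `gate`, `experts`) are ours, and it trains the penalized likelihood by gradient descent rather than by the EM procedure of Peralta (2014). It computes the mixture posterior $p(y \mid x)$ and the $\ell_1$-penalized negative log-likelihood.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoEClassifier(nn.Module):
    """Regularized mixture of experts: a softmax gate plus K linear softmax experts."""

    def __init__(self, in_dim: int, n_classes: int, n_experts: int):
        super().__init__()
        self.gate = nn.Linear(in_dim, n_experts, bias=False)   # gate parameters nu_k
        self.experts = nn.ModuleList(
            [nn.Linear(in_dim, n_classes, bias=False) for _ in range(n_experts)]  # omega_{k,l}
        )

    def forward(self, x):
        gate_probs = F.softmax(self.gate(x), dim=-1)            # p(m_k | x), shape (B, K)
        expert_probs = torch.stack(
            [F.softmax(e(x), dim=-1) for e in self.experts], dim=1
        )                                                       # p(y | m_k, x), shape (B, K, Q)
        return torch.einsum("bk,bkq->bq", gate_probs, expert_probs)  # p(y | x), shape (B, Q)

    def l1_penalty(self, lam_nu: float, lam_omega: float):
        pen = lam_nu * self.gate.weight.abs().sum()
        pen = pen + lam_omega * sum(e.weight.abs().sum() for e in self.experts)
        return pen


def loss_fn(model, x, y, lam_nu=1e-3, lam_omega=1e-3):
    # Negative regularized log-likelihood: -<L_c> + lambda_nu * ||nu||_1 + lambda_omega * ||omega||_1
    p_y = model(x).clamp_min(1e-12)
    nll = -torch.log(p_y.gather(1, y.unsqueeze(1))).mean()
    return nll + model.l1_penalty(lam_nu, lam_omega)
```

In the original formulation the same penalized objective is optimized by EM with per-expert weighted least-squares subproblems; the sketch only illustrates the model structure and the regularized likelihood.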

MoSEs generalize this structure to other settings, including Transformer-based sparse MoEs (Do et al., 29 Mar 2025, Kang, 9 Nov 2025), low-rank adapters for compositional LLMs (Kang et al., 17 Jun 2024), and graph substructure methods (Ye et al., 11 Sep 2025).

2. Routing Mechanisms and Expert Activation

A defining property of MoSEs is dynamic, data-dependent routing. Routing mechanisms vary but universally aim to select a sparse, specialized subset of subexperts for processing each datum:

  • Softmax gating: As in the original MoE, input-dependent softmax gates yield a probability distribution over $K$ experts, often followed by top-$k$ truncation for sparsity and computational efficiency (Peralta, 2014, Do et al., 29 Mar 2025).
  • Binary masking and selection: In continual LLM adaptation, binary masks $m_{\ell,k}$ specify which parameters in each expert to activate for a given task, with routing networks selecting the top-$M$ experts per task and layer (Kang, 9 Nov 2025).
  • Latent semantic routing: For LLM specialists, a linear router computes gating weights via $s(x) = \theta_r x$; the top-$k$ entries determine the active experts per token (Kang et al., 17 Jun 2024).
  • Task-conditional routing: In vehicle routing (VRP), routing combines task/state embeddings through a gating network producing mixture weights $\alpha_i$ for each LoRA-based subexpert (Pan et al., 24 Oct 2025).
  • Topology-aware gating: In graph MoSEs, subgraph experts are selected per node by a gate considering both the local node representation and its neighborhood, with sparse softmax selection ensuring specialization (Ye et al., 11 Sep 2025).

Unified competitive learning MoSEs blend token choice and expert choice, scoring both per-token and per-expert, and selecting top assignments via a competitive score, maximizing diversity and avoiding expert collapse (Do et al., 29 Mar 2025).
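As a concrete illustration of the sparse top-$k$ pattern shared by these routers, the sketch below implements generic token-level top-$k$ routing. It is a minimal example of the common SMoE recipe, not the exact router of any cited work, and the names (`top_k_route`, `router_weight`) are ours.

```python
import torch
import torch.nn.functional as F

def top_k_route(x, router_weight, k=2):
    """Generic sparse top-k routing over a pool of experts.

    x:             (num_tokens, d_model) token representations
    router_weight: (n_experts, d_model) linear router, s(x) = router_weight @ x
    Returns renormalized gate weights and the indices of the k selected experts per token.
    """
    scores = x @ router_weight.t()                   # (num_tokens, n_experts) routing scores
    topk_scores, topk_idx = scores.topk(k, dim=-1)   # keep only k experts per token
    gates = F.softmax(topk_scores, dim=-1)           # renormalize over the selected experts
    return gates, topk_idx

# Usage: combine the outputs of the selected experts per token, e.g.
#   y_t = sum_j gates[t, j] * expert[topk_idx[t, j]](x[t])
```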

3. Sparsity, Specialization, and Feature Subspaces

Sparsity-induced subexpert specialization is central to MoSE efficacy. In linear models, $\ell_1$ regularization yields expert and gate weight vectors with many zeros, directly enforcing subspace specialization; each subexpert thus “operates” in a subset of the input dimensions best suited to its region (Peralta, 2014).

In deep architectures, subexperts are realized as parameter-efficient modules (e.g., LoRA adapters (Kang, 9 Nov 2025, Kang et al., 17 Jun 2024)), binary-masked subnetworks, or specialized neural heads (e.g., dataset-specific FFNs in DAMEX (Jain et al., 2023)). Routing and sparsity mechanisms prevent overlap and interference, while maintaining the potential for adaptive recombination of prior “subexpert” knowledge in new contexts or tasks.
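For instance, a parameter-efficient subexpert can be realized as a LoRA-style low-rank adapter attached to a frozen base projection. The module below is our own minimal sketch of this idea (not the implementation of the cited papers): a rank-$r$ update $W x + \tfrac{\alpha}{r} B A x$ that can serve as one routable subexpert.

```python
import torch
import torch.nn as nn

class LoRASubExpert(nn.Module):
    """A rank-r adapter over a frozen base projection: y = W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # the shared base weights stay frozen
            p.requires_grad_(False)
        self.A = nn.Linear(base.in_features, rank, bias=False)
        self.B = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.B.weight)         # adapter starts as a no-op update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.B(self.A(x))
```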

Subspace selectivity appears in transformer-based SMoEs: only a small subset of FFNs (experts) are applied to each token, governed by routing. Theoretical and empirical evidence shows that balanced expert utilization mitigates collapse and enables specialization (Do et al., 29 Mar 2025).

4. Training Schemes and Optimization

MoSEs are trained under objectives that promote both task performance and specialization:

  • Likelihood maximization with structured sparsity: EM alternates between responsibility estimation and independent convex optimization for sparse gates/experts (Peralta, 2014).
  • Router and expert joint training: Jointly learning routers and subexperts, with hard or soft gating, is the norm in deep models (Do et al., 29 Mar 2025, Kang, 9 Nov 2025).
  • Auxiliary balancing losses: To prevent expert collapse and overload, auxiliary losses (e.g., load-balancing loss in DAMEX (Jain et al., 2023), coefficient-of-variation loss in graph MoSEs (Ye et al., 11 Sep 2025)) ensure uniform expert use across data.
  • Latent-space compositionality: In VRP solvers (MoSES), basis experts are pretrained independently, then recombined via a learned mixture function and residual adapter in a unified solver, under the theoretical guarantee that this compositionality recovers the optimal policy under mild assumptions (Pan et al., 24 Oct 2025).
  • Continual learning-specific regularization: Pull loss aligns task keys and embedding means for task-inference robustness in continual LLM MoSEs (Kang, 9 Nov 2025).

Algorithmic efficiency is maintained via sparse activation, sequence-level flattening, and per-task masking, keeping both compute and memory costs sublinear in the number of tasks or experts (Kang, 9 Nov 2025, Do et al., 29 Mar 2025).
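To make the auxiliary balancing objective concrete, the sketch below shows a standard Switch-Transformer-style load-balancing loss, $n_{\text{experts}} \sum_i f_i P_i$, where $f_i$ is the fraction of tokens dispatched to expert $i$ and $P_i$ the mean router probability for expert $i$. This is a common generic formulation; the exact losses in the cited works may differ.

```python
import torch
import torch.nn.functional as F

def load_balancing_loss(router_logits, expert_assignment, n_experts):
    """Auxiliary loss encouraging uniform expert utilization.

    router_logits:     (num_tokens, n_experts) raw router scores
    expert_assignment: (num_tokens,) index of the expert each token was routed to
    """
    probs = F.softmax(router_logits, dim=-1)
    # f_i: fraction of tokens dispatched to expert i
    frac_tokens = F.one_hot(expert_assignment, n_experts).float().mean(dim=0)
    # P_i: mean router probability assigned to expert i
    mean_probs = probs.mean(dim=0)
    # The product sum is minimized when both distributions are uniform over experts
    return n_experts * torch.sum(frac_tokens * mean_probs)
```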

5. Applications and Empirical Performance

MoSEs have been deployed across a diverse range of domains:

| Domain | Instantiation | Focal Mechanism |
|---|---|---|
| High-dimensional classification | $\ell_1$-regularized MoE (Peralta, 2014) | Per-expert/gate feature selection |
| LLM continual learning | Binary-masked LoRA subexperts (Kang, 9 Nov 2025) | Task-specific routing, adaptive re-use |
| Compositional/self-specialized LLMs | LoRA-based modular experts (Kang et al., 17 Jun 2024) | Self-synthesized data, top-$k$ routing |
| Graph learning | Subgraph-based experts (Ye et al., 11 Sep 2025) | Topology-aware extraction and gating |
| Vision multitask detection | Dataset-token-to-expert routing (Jain et al., 2023) | Dataset-aware cross-entropy supervision |
| AI-text detection | Stylistic prototype-based experts (Wu et al., 2 Sep 2025) | Style-aware routing, conditional thresholds |
| Vehicle routing optimization | LoRA-basis experts in latent space (Pan et al., 24 Oct 2025) | Task/state-adaptive mixture of adapters |

Across these contexts, MoSEs yield marked gains in:

  • Specialization: Fewer irrelevant features per expert (Peralta, 2014); dataset- or domain-specific expert assignment (Jain et al., 2023).
  • Knowledge retention: Minimal forgetting in continual LLMs, with sublinear growth of learned parameters (Kang, 9 Nov 2025).
  • Efficiency: Up to 14% reduction in FLOPs and strong scalability in deep MoE layers (Do et al., 29 Mar 2025).
  • Generalization: Substantial accuracy improvements, especially in low-resource settings (e.g., +39% in AI-text detection (Wu et al., 2 Sep 2025)), and robust OOD generalization in combinatorial optimization (Pan et al., 24 Oct 2025).
  • Interpretability: Visualizable substructure-to-expert alignments in graphs (Ye et al., 11 Sep 2025); clear semantic mapping from tasks to modular experts in LLMs (Kang et al., 17 Jun 2024).

6. Theoretical Analysis and Guarantees

MoSEs inherit substantive theoretical properties from both mixture modeling and modular/expert learning:

  • Subgraph expressivity: MoSEs based on random walk kernels and hidden graph modules are at least as powerful as the Subgraph Weisfeiler–Lehman (SWL) test; this guarantees ability to distinguish graphs beyond the 1-WL barrier (Ye et al., 11 Sep 2025).
  • Optimal compositionality: In latent-space-decomposable MDPs for combinatorial RL, a MoSE mixture over basis experts with a learnable fusion function provably recovers the optimal unified policy, provided certain bijectivity and independence conditions are met (Pan et al., 24 Oct 2025).
  • Competitive learning: Unified competitive routing in MoSEs achieves at least the top assignment score of both token-choice and expert-choice baselines for any selection, theoretically favoring balanced and informative routing (Do et al., 29 Mar 2025).
  • Forgetting bounds: Strict subexpert isolation in continual MoSEs empirically yields minimal or even positive backward transfer, contrasting with catastrophic forgetting in monolithic or adapter-only approaches (Kang, 9 Nov 2025).

7. Extensions, Limitations, and Future Directions

MoSEs provide a flexible framework but entail critical design decisions, open challenges, and fronts for innovation:

  • Routing mechanism and pool size: Determining the optimal number of subexperts and calibrating gating for new, outlier tasks is nontrivial; a static expert pool may exhaust its expressive capacity in highly novel regimes (Kang, 9 Nov 2025).
  • Dynamic expert expansion: Promising directions include auto-expanding the expert pool upon sustained routing uncertainty or high out-of-distribution detection (Kang, 9 Nov 2025).
  • Regularization for balancing and diversity: Stronger entropy or load-balancing regularizers may further mitigate collapse and overload (Jain et al., 2023, Do et al., 29 Mar 2025).
  • Multi-modal, multilingual, and hierarchical extensions: MoSEs are poised for hierarchical stacking or cross-modal specialization, as proposed in multi-modal competitive learning (Do et al., 29 Mar 2025).
  • Inference-time constraints: Dense routing increases compute, but sparsity may trade off with model quality; expert pruning and prototype compression offer practical mitigation (Wu et al., 2 Sep 2025, Jain et al., 2023).

In summary, MoSEs unify a spectrum of sparse, dynamically routed, and highly specialized expert architectures, displaying efficiency, modularity, and theoretical robustness across a wide array of demanding machine learning scenarios.
