
Mixture of Neuron Experts (MoNE)

Updated 8 October 2025
  • The paper demonstrates that MoNE achieves universal approximation while using neuron-level gating to select only the most highly active neurons, reducing the overall number of active parameters.
  • It employs intra-expert top-k selection to enhance computational efficiency while matching or exceeding the accuracy of traditional MoE models.
  • MoNE’s design enables practical model pruning and adaptive inference, offering a scalable framework for efficient neural network deployment in resource-constrained settings.

Mixture of Neuron Experts (MoNE) is a neural network architecture and theoretical framework built on the classical Mixture-of-Experts (MoE) paradigm but refined to operate at a finer granularity, whereby expert selection and activation are performed at the neuron or subnetwork level within each expert. MoNE leverages intra-expert sparsity and neuron-level gating to improve both parameter utilization and computational efficiency. Core theoretical contributions include universal approximation theorems, practical sparsification studies, and empirical demonstrations that neuron-granular mixtures match or exceed the performance of traditional MoE with lower active parameter counts (Cheng et al., 7 Oct 2025, Nguyen et al., 2016).

1. Theoretical Foundations and Universal Approximation

The foundational universal approximation theorem for MoNE establishes that for any continuous target function $f$ defined on a compact set $K$ and any desired accuracy $\epsilon > 0$, there exists a configuration of MoNE parameters such that the MoNE mean function $M(x)$ satisfies

$$\sup_{x \in K} |f(x) - M(x)| < \epsilon.$$

MoNE mean functions are of the form

$$M(x) = \sum_{i} \pi_i(x; v)\, \phi_i(x; w_i),$$

where $\pi_i(\cdot; v)$ are gating functions and $\phi_i(\cdot; w_i)$ are expert functions. The class $\{M(x)\}$ is dense in $C(K)$, the space of continuous functions on the compact domain $K$ (Nguyen et al., 2016). This result is analogous to the universal approximation property for fully connected networks, but MoNE achieves approximation by soft-partitioning the input space and delegating localized approximation tasks to neuron-level experts.
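The mixture mean function admits a direct computational reading. The following minimal sketch (PyTorch) assumes softmax gating and simple affine experts; the function and variable names (`mixture_mean`, `gate_w`, `expert_ws`) are illustrative and not drawn from the cited papers.

```python
import torch

def mixture_mean(x, gate_w, expert_ws, expert_bs):
    # Gating weights pi_i(x; v): softmax over per-expert logits.
    pi = torch.softmax(x @ gate_w, dim=-1)                       # (batch, n_experts)
    # Expert outputs phi_i(x; w_i): simple affine experts for illustration.
    phi = torch.einsum("bi,eio->beo", x, expert_ws) + expert_bs  # (batch, n_experts, d_out)
    # Mixture mean M(x) = sum_i pi_i(x) * phi_i(x).
    return torch.einsum("be,beo->bo", pi, phi)

# Usage with small illustrative dimensions.
x = torch.randn(4, 8)                                # batch of inputs
M = mixture_mean(x,
                 gate_w=torch.randn(8, 3),           # gating parameters v (3 experts)
                 expert_ws=torch.randn(3, 8, 1),     # expert parameters w_i
                 expert_bs=torch.zeros(3, 1))
print(M.shape)                                       # torch.Size([4, 1])
```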

The key implications are:

  • MoNE architectures can approximate any continuous function arbitrarily well given sufficient neurons and flexible gating.
  • The modular structure, with each neuron expert focusing on local regimes, leads to potentially improved efficiency and interpretable, locally-tuned approximations.
  • Extensions to multiple-output and conditional density settings also hold, with denseness guarantees for vector-valued functions using Gaussian (or softmax) gating (Nguyen et al., 2017).

2. Motivations and Sparsification Observations

Empirical analyses reveal that in standard MoE models, activated experts contain many neurons with near-zero activation, implying a significant degree of underutilization of the network's capacity. Systematic pruning of expert parameters by ranking their activation magnitudes shows that up to 60% of parameters within the activated subset can be removed with negligible task-performance degradation; substantial performance drops occur only after pruning over 90% (Cheng et al., 7 Oct 2025). Visualization further confirms that most neuron activations remain near zero across a variety of MoE instantiations.
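This pruning criterion can be mimicked in a few lines. The sketch below (PyTorch) zeroes out the lowest-magnitude gate activations of an activated expert; the function name `prune_by_activation` and the per-token masking are illustrative assumptions, not the paper's exact protocol.

```python
import torch
import torch.nn.functional as F

def prune_by_activation(G, prune_ratio=0.6):
    # G: (n_tokens, d_expert) gate activations SiLU(W_gate x) of one activated expert.
    # Keep only the (1 - prune_ratio) fraction of neurons with the largest |activation| per token.
    d_expert = G.shape[-1]
    n_keep = max(1, int(round(d_expert * (1.0 - prune_ratio))))
    keep_idx = torch.topk(G.abs(), n_keep, dim=-1).indices     # highest-magnitude neurons
    mask = torch.zeros_like(G).scatter_(-1, keep_idx, 1.0)     # 1 for retained neurons
    return G * mask

G = F.silu(torch.randn(4, 16))
G_pruned = prune_by_activation(G, prune_ratio=0.6)
print((G_pruned != 0).float().mean().item())  # ≈0.375: roughly 60% of activations removed
```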

This observation motivates the MoNE methodology:

  • Rather than performing expert selection only at the expert level, perform top-$k$ selection at the neuron level within each activated expert.
  • This approach can halve the number of activated parameters per MoE layer while retaining full (or superior) predictive accuracy compared to traditional MoE evaluated at equivalent activated parameter budgets.

3. MoNE Architecture and Inference Mechanism

MoNE achieves neuron-granular expert selection within each activated expert by applying a top-$k$ threshold to per-neuron gating values. For an input $x$, the expert output is decomposed as follows:

$$E_i(x) = W_{\text{down},i}\left(\operatorname{SiLU}(W_{\text{gate},i}\, x) \odot W_{\text{up},i}\, x\right).$$

Let $G = \operatorname{SiLU}(W_{\text{gate},i}\, x)$ and $H = W_{\text{up},i}\, x$. The output can then be rewritten as a sum over neurons:

$$E_i(x) = \sum_{k} G[k]\, \big(W_{\text{down},i}[:,k]\, W_{\text{up},i}[k,:]\, x\big).$$

The set $\mathcal{I}_N = \operatorname{argtopK}(|G|)$ identifies the $k$ highest-magnitude neuron activations within each expert. Only this subset is retained in the forward computation, reducing both the volume of computation and the number of active parameters.
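A minimal sketch of this forward pass appears below. Tensor shapes, the gather-based implementation, and the name `mone_expert_forward` are assumptions for illustration; a fused kernel would compute only the selected rows of $W_{\text{up},i}$ instead of gathering from the full projection.

```python
import torch
import torch.nn.functional as F

def mone_expert_forward(x, W_gate, W_up, W_down, k):
    # x: (batch, d_model); W_gate, W_up: (d_expert, d_model); W_down: (d_model, d_expert)
    G = F.silu(x @ W_gate.T)                        # per-neuron gate values, (batch, d_expert)
    idx = torch.topk(G.abs(), k, dim=-1).indices    # I_N: top-k highest-|G| neurons per token
    G_k = torch.gather(G, -1, idx)                  # selected gate values, (batch, k)
    H_k = torch.gather(x @ W_up.T, -1, idx)         # selected up-projection outputs, (batch, k)
    W_down_k = W_down.T[idx]                        # down-projection rows of selected neurons, (batch, k, d_model)
    # E_i(x) restricted to the selected neurons: sum over k of G[k] * H[k] * W_down[:, k].
    return torch.einsum("bk,bkd->bd", G_k * H_k, W_down_k)

x = torch.randn(2, 8)
out = mone_expert_forward(x,
                          W_gate=torch.randn(32, 8),
                          W_up=torch.randn(32, 8),
                          W_down=torch.randn(8, 32),
                          k=8)
print(out.shape)  # torch.Size([2, 8])
```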

Key properties of the MoNE approach:

  • No additional router parameters or inter-expert communication are required for neuron-level selection.
  • The computational overhead of the intra-expert top-$k$ operation is negligible relative to full expert computation.

4. Parameter Utilization, Efficiency, and Performance

Experiments on a variety of model and task configurations reveal several consistent findings (Cheng et al., 7 Oct 2025):

  • MoNE matches or exceeds the task accuracy of traditional MoE models while activating only 50% of the MoE layer's parameters.
  • At matched activated parameter counts, MoNE consistently outperforms standard MoE, with relative improvements of 1–2% in several settings.
  • Inference latency and GPU memory consumption are comparable to traditional MoE, because the top-$k$ operation for intra-expert selection is lightweight and local.
  • Introducing a neuron-granular load balance loss (NG-LBL) further encourages uniform utilization of neuron experts, mitigating cases where a small group of neuron experts receives a disproportionate fraction of the gating mass.

5. Mathematical Formulations and Implementation

MoNE's formalism proceeds by decomposing an expert's output as a sum over neuron experts and applying a top-$k$ operator:

  • For each selected expert, select the $k$ neurons with the highest $|G[k]|$.
  • Only these neurons and the associated blocks in $(W_{\text{up}}, W_{\text{down}})$ are used to compute the output for the given input.
  • No additional routing network is needed for this fine-grained selection; the selection is performed using per-sample activations.

An additional neuron-level load balance loss is introduced:

$$L_{\text{NG-LBL},i} = \alpha_{\text{NG}} \cdot d_{\text{expert}} \sum_{k} f_{ik} P_{ik},$$

where $f_{ik}$ is the fraction of tokens routed to neuron $k$ in expert $i$ and $P_{ik}$ is the average gating value. This loss promotes balanced neuron utilization throughout training.
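The sketch below implements a loss of this form. The coefficient value, the use of a softmax over gate activations to define the average gating value $P_{ik}$, and all names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def ng_lbl_loss(G, k, alpha_ng=1e-2):
    # G: (n_tokens, d_expert) gate activations of one activated expert i.
    n_tokens, d_expert = G.shape
    idx = torch.topk(G.abs(), k, dim=-1).indices
    routed = torch.zeros_like(G).scatter_(-1, idx, 1.0)   # 1 if a neuron is selected for a token
    f = routed.mean(dim=0)                                # f_ik: fraction of tokens routed to neuron k
    P = F.softmax(G, dim=-1).mean(dim=0)                  # P_ik: average (softmax-normalized) gating value
    return alpha_ng * d_expert * torch.sum(f * P)         # L_NG-LBL,i

G = F.silu(torch.randn(16, 32))
print(ng_lbl_loss(G, k=8))
```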

6. Practical Implications and Future Directions

MoNE extends the efficiency and scalability advantages of MoE to a finer granularity of computation. By selecting only neuron-level subexperts with high activation, MoNE:

  • Substantially increases the effective utilization of activated parameters per token.
  • Achieves considerable reduction in the number of active parameters and computation without degrading performance.
  • Suggests strategies for structured model pruning, compression, and adaptive computation at inference, applicable in resource-constrained environments.

Theoretical results motivate further research into adaptive algorithms for expert/neuron selection and exploration of how such granular mixtures can be extended to more complex or hierarchical structures (e.g., mixtures of subnets or layers) (Nguyen et al., 2016). Open questions include the optimal criteria for intra-expert neuron selection beyond top‑kk magnitude and the interaction between neuron-granular gating and global model calibration.

7. Context Within the Landscape of MoE Architectures

MoNE is situated as a refinement of modern mixture-of-experts models that seeks to address the inefficiencies of classic expert-level selection by leveraging empirically observed sparsity at the neuron level within experts. This is distinct from prior approaches that target expert-level routing, hierarchical mixtures, or parameter sharing, and instead exploits the activation structure within each expert for dynamic neuron-wise gating. The approach is supported by both theoretical density results and rigorous empirical benchmarking, demonstrating its practical viability for large-scale deployment of sparse neural architectures.


In summary, Mixture of Neuron Experts (MoNE) delivers a theoretically grounded, practically validated framework for improving parameter and computational efficiency in MoE-like neural architectures by leveraging neuron-level sparsity and selection. Its principal innovation lies in intra-expert, neuron-granular mixture modeling, enabling greater utilization of neural capacity and robust performance at lower activation budgets (Cheng et al., 7 Oct 2025, Nguyen et al., 2016).
