
Routers in Vision Mixture of Experts: An Empirical Study (2401.15969v2)

Published 29 Jan 2024 in cs.CV, cs.AI, and cs.LG

Abstract: Mixture-of-Experts (MoE) models are a promising way to scale up model capacity without significantly increasing computational cost. A key component of MoEs is the router, which decides which subset of parameters (experts) process which feature embeddings (tokens). In this paper, we present a comprehensive study of routers in MoEs for computer vision tasks. We introduce a unified MoE formulation that subsumes different MoEs with two parametric routing tensors. This formulation covers both sparse MoE, which uses a binary or hard assignment between experts and tokens, and soft MoE, which uses a soft assignment between experts and weighted combinations of tokens. Routers for sparse MoEs can be further grouped into two variants: Token Choice, which matches experts to each token, and Expert Choice, which matches tokens to each expert. We conduct head-to-head experiments with 6 different routers, including existing routers from prior work and new ones we introduce. We show that (i) many routers originally developed for language modeling can be adapted to perform strongly in vision tasks, (ii) in sparse MoE, Expert Choice routers generally outperform Token Choice routers, and (iii) soft MoEs generally outperform sparse MoEs with a fixed compute budget. These results provide new insights regarding the crucial role of routers in vision MoE models.

Introduction

Mixture-of-Experts (MoE) models represent an important direction for scaling neural network capacity efficiently. These models incorporate sparsity into deep learning by routing each input through only a subset of the available experts, sub-networks that specialize in different parts of the input space. This paper dissects the router mechanisms responsible for this dynamic allocation in the context of computer vision tasks, evaluating their efficacy for building a robust vision MoE system.

Unified MoE Formulation

The researchers present a unified formulation for comparing and implementing various MoE layers. They identify two classes: sparse and soft MoE. Sparse MoEs make a binary decision about which expert handles each input token, while soft MoEs blend input tokens across experts with continuous weights.
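The sketch below illustrates this unified view in generic Python/NumPy. The tensor names, shapes, and the per-expert slot dimension are assumptions made for illustration, not the paper's exact notation: with binary dispatch and combine weights the layer behaves as a sparse MoE, while dense, normalized weights give a soft MoE.

```python
import numpy as np

def moe_layer(x, dispatch, combine, experts):
    """Generic MoE layer driven by two routing tensors (illustrative shapes).

    x        : (n, d)     token embeddings
    dispatch : (e, s, n)  weights mixing the n tokens into each expert's s input slots
    combine  : (n, e, s)  weights mixing the expert/slot outputs back into each token
    experts  : list of e callables mapping an (s, d) array to an (s, d) array
    """
    e = len(experts)
    # Route: build each expert's input as a (possibly hard) combination of tokens.
    expert_inputs = np.einsum('esn,nd->esd', dispatch, x)
    # Apply each expert to its own slots.
    expert_outputs = np.stack([experts[i](expert_inputs[i]) for i in range(e)])
    # Combine: mix expert/slot outputs back into one output vector per token.
    return np.einsum('nes,esd->nd', combine, expert_outputs)
```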

The authors analyze two sub-types of sparse MoEs: Token Choice and Expert Choice. In Token Choice, each token is matched to one or more experts; Expert Choice inverts this relationship and lets each expert select the tokens it processes. The authors argue that Expert Choice generally performs better because it keeps expert utilization balanced.
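A minimal sketch of the two assignment schemes, assuming a precomputed token-expert affinity matrix (function and variable names are illustrative, not the paper's implementation):

```python
import numpy as np

def token_choice(affinity, k):
    """Token Choice: every token selects its top-k experts.
    Returns a boolean (n_tokens, n_experts) assignment matrix."""
    n, e = affinity.shape
    assign = np.zeros((n, e), dtype=bool)
    top_experts = np.argsort(-affinity, axis=1)[:, :k]        # per-token best experts
    assign[np.arange(n)[:, None], top_experts] = True
    return assign                                              # some experts may be overloaded

def expert_choice(affinity, capacity):
    """Expert Choice: every expert selects its top-`capacity` tokens, so all
    experts process the same number of tokens (balanced load), though some
    tokens may end up selected by no expert."""
    n, e = affinity.shape
    assign = np.zeros((n, e), dtype=bool)
    top_tokens = np.argsort(-affinity, axis=0)[:capacity, :]  # per-expert best tokens
    assign[top_tokens, np.arange(e)[None, :]] = True
    return assign
```

The trade-off is visible directly in the code: Token Choice fixes how many experts each token uses but can leave experts unevenly loaded, while Expert Choice fixes each expert's load but may leave some tokens unprocessed.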

Parametric Evaluation of Routers

Within this comparative framework, the paper evaluates six routers, including routers previously used for natural language processing and newly developed ones. Specifically, the investigation covers Token Choice and Expert Choice routers based on Softmax and Sinkhorn normalization, as well as a novel sparsity-constrained Expert Choice router.
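As one example of the affinity normalizations mentioned above, a Sinkhorn-style router replaces the per-row softmax with alternating row and column rescaling so that tokens and experts both receive balanced soft assignments before any hard top-k selection. The sketch below shows a generic Sinkhorn normalization under assumed marginals and iteration count; it is not the paper's exact algorithm or hyperparameters.

```python
import numpy as np

def sinkhorn_normalize(affinity, n_iters=20):
    """Alternately rescale rows and columns of exp(affinity) so that each of
    the n tokens carries (approximately) unit mass and each of the e experts
    receives (approximately) n / e mass. The resulting soft assignment can
    then be sparsified, e.g. by a per-token or per-expert top-k."""
    p = np.exp(affinity)                                  # positive kernel
    n, e = p.shape
    for _ in range(n_iters):
        p = p / p.sum(axis=1, keepdims=True)              # each token sums to 1
        p = p / p.sum(axis=0, keepdims=True) * (n / e)    # each expert sums to n / e
    return p
```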

The authors find that while the routing strategy strongly affects the performance of sparse MoEs, the parameterization of the token-to-expert affinity matrix is of secondary importance. In contrast, soft MoE, built on the SoftMoE router, proves superior under matched computational budgets.
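The soft routing referenced here can be sketched as follows, following the publicly described SoftMoE idea: each expert owns a few "slots", every slot input is a convex combination of all tokens, and every token output is a convex combination of all slot outputs. Parameter names and shapes are assumptions for illustration.

```python
import numpy as np

def _softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    ez = np.exp(z)
    return ez / ez.sum(axis=axis, keepdims=True)

def soft_moe(x, phi, experts, slots_per_expert):
    """Soft MoE layer sketch (illustrative, not the paper's exact code).

    x       : (n, d)      token embeddings
    phi     : (d, e * p)  learnable slot parameters (p = slots_per_expert)
    experts : list of e callables mapping a (p, d) array to a (p, d) array
    """
    n, d = x.shape
    e, p = len(experts), slots_per_expert
    logits = x @ phi                                 # (n, e*p) token-slot affinities
    dispatch = _softmax(logits, axis=0)              # per slot: weights over all tokens
    combine = _softmax(logits, axis=1)               # per token: weights over all slots
    slot_inputs = (dispatch.T @ x).reshape(e, p, d)  # every slot sees a token mixture
    slot_outputs = np.stack([experts[i](slot_inputs[i]) for i in range(e)])  # (e, p, d)
    return combine @ slot_outputs.reshape(e * p, d)  # (n, d): no token is dropped
```

Because the routing weights in this sketch are dense and differentiable, no tokens are dropped and no auxiliary load-balancing loss is needed, which is consistent with the fixed-compute-budget advantage reported for soft MoE.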

Empirical Insights

Extensive empirical evaluations support these findings. Routers originally engineered for language modeling perform strongly when adapted to vision tasks, corroborating that MoE routing is largely agnostic to the underlying task and architecture. In addition, soft MoE models outperform their sparse counterparts across various benchmarks, establishing soft routing as an efficient and potent approach for scaling vision models.

Router performance is assessed through large-scale pre-training on the JFT-300M dataset and through transfer tasks, including few-shot classification on ImageNet. Notably, Expert Choice routers, which let each expert independently select the tokens it processes, consistently outperform Token Choice routers. Moreover, soft MoE models, despite their distinct operating mechanism, attain the highest performance, underscoring their relevance for future work.

Concluding Thoughts

The paper concludes, with strong empirical support, that routers originally developed for language modeling transfer effectively to vision. Among sparse MoE models, those using Expert Choice routing are particularly effective. The success of soft MoE models confirms the potential of alternative routing strategies to increase model capacity without incurring inordinate computational cost.

This study underscores the central role of routers in vision MoE models and lays a foundation for future investigations. Soft MoEs, in particular, offer a promising direction for advancing MoE methods beyond conventional sparse routing.

Authors (4)
  1. Tianlin Liu (24 papers)
  2. Mathieu Blondel (43 papers)
  3. Carlos Riquelme (26 papers)
  4. Joan Puigcerver (20 papers)
Citations (3)