StableMoE: Stable Routing Strategy for Mixture of Experts (2204.08396v1)

Published 18 Apr 2022 in cs.LG and cs.CL

Abstract: The Mixture-of-Experts (MoE) technique can scale up the model size of Transformers with an affordable computational overhead. We point out that existing learning-to-route MoE methods suffer from the routing fluctuation issue, i.e., the target expert of the same input may change along with training, but only one expert will be activated for the input during inference. The routing fluctuation tends to harm sample efficiency because the same input updates different experts but only one is finally used. In this paper, we propose StableMoE with two training stages to address the routing fluctuation problem. In the first training stage, we learn a balanced and cohesive routing strategy and distill it into a lightweight router decoupled from the backbone model. In the second training stage, we utilize the distilled router to determine the token-to-expert assignment and freeze it for a stable routing strategy. We validate our method on language modeling and multilingual machine translation. The results show that StableMoE outperforms existing MoE methods in terms of both convergence speed and performance.
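
The two-stage idea in the abstract can be illustrated with a short sketch. The snippet below is not the authors' implementation; it is a minimal PyTorch illustration, with hypothetical names (`DistilledRouter`, `distillation_loss`, `freeze`) and illustrative dimensions, of how a lightweight token-to-expert router might be distilled from the backbone's learned routing in stage 1 and then frozen in stage 2 so each token keeps a fixed expert assignment.

```python
# Minimal sketch (assumed, not the paper's code) of a distilled, freezable router.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistilledRouter(nn.Module):
    """Lightweight router decoupled from the backbone: maps token ids to
    experts via a small embedding table (the stage-1 distillation target)."""
    def __init__(self, vocab_size: int, num_experts: int, dim: int = 64):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, dim)          # per-token feature
        self.expert_emb = nn.Parameter(torch.randn(num_experts, dim))

    def logits(self, token_ids):
        # (batch, seq) -> (batch, seq, num_experts)
        return self.token_emb(token_ids) @ self.expert_emb.t()

    def assign(self, token_ids):
        # Hard top-1 token-to-expert assignment.
        return self.logits(token_ids).argmax(dim=-1)

def distillation_loss(router: DistilledRouter, token_ids, teacher_assignments):
    """Stage 1: train the lightweight router to imitate the assignments
    produced by the backbone's learned routing (cross-entropy distillation)."""
    logits = router.logits(token_ids)
    return F.cross_entropy(logits.flatten(0, 1), teacher_assignments.flatten())

def freeze(router: DistilledRouter):
    """Stage 2: freeze the distilled router so the token-to-expert mapping
    stays fixed for the rest of training and at inference."""
    for p in router.parameters():
        p.requires_grad_(False)
    router.eval()
```

Because the frozen router depends only on the token (not on the evolving backbone representations), the expert that updates on a given input is the same expert used at inference, which is the stability property the paper targets.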

Authors (7)
  1. Damai Dai (38 papers)
  2. Li Dong (154 papers)
  3. Shuming Ma (83 papers)
  4. Bo Zheng (205 papers)
  5. Zhifang Sui (89 papers)
  6. Baobao Chang (80 papers)
  7. Furu Wei (291 papers)
Citations (51)