M6-T: Exploring Sparse Expert Models and Beyond (2105.15082v5)

Published 31 May 2021 in cs.LG and cs.CL

Abstract: Mixture-of-Experts (MoE) models can achieve promising results with an outrageously large number of parameters at constant computation cost, and they have thus become a trend in model scaling. Still, it remains a mystery how MoE layers bring quality gains by leveraging parameters with sparse activation. In this work, we investigate several key factors in sparse expert models. We observe that load imbalance may not be a significant problem affecting model quality, contrary to the perspectives of recent studies, while the number of sparsely activated experts $k$ and the expert capacity $C$ in top-$k$ routing can make a significant difference in this context. Furthermore, we take a step forward and propose a simple method called expert prototyping, which splits experts into different prototypes and applies $k$ top-$1$ routing. This strategy improves model quality while maintaining constant computational cost, and our further exploration on extremely large-scale models shows that it is more effective for training larger models. We push the model scale to over $1$ trillion parameters and implement it on only $480$ NVIDIA V100-32GB GPUs, compared with recent SOTAs trained on $2048$ TPU cores. The proposed giant model achieves substantial speedup in convergence over the same-size baseline.

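The abstract contrasts standard top-$k$ routing (with expert capacity $C$) against expert prototyping, which splits the experts into $k$ prototype groups and applies top-$1$ routing within each group, so roughly $k$ experts still fire per token. Below is a minimal PyTorch sketch of that idea; the module name, the per-prototype gating matrices, and the feed-forward expert shape are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExpertPrototypeRouter(nn.Module):
    """Minimal sketch of 'k top-1' routing over expert prototypes.

    Assumed design: each of the k prototypes owns num_experts // k experts and
    routes every token to its own top-1 expert, so about k experts are active
    per token (a compute budget comparable to top-k routing).
    """

    def __init__(self, d_model: int, num_experts: int, k: int):
        super().__init__()
        assert num_experts % k == 0, "experts must split evenly into k prototypes"
        self.k = k
        self.experts_per_proto = num_experts // k
        # One gating matrix per prototype (an assumption for illustration).
        self.gates = nn.ModuleList(
            nn.Linear(d_model, self.experts_per_proto, bias=False) for _ in range(k)
        )
        # Each expert is a small feed-forward block, purely for illustration.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.ReLU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model). Each prototype independently picks one
        # expert per token; outputs are summed, weighted by gate probabilities.
        out = torch.zeros_like(x)
        for p, gate in enumerate(self.gates):
            probs = F.softmax(gate(x), dim=-1)       # (num_tokens, experts_per_proto)
            top_prob, top_idx = probs.max(dim=-1)    # top-1 within this prototype
            for local_e in range(self.experts_per_proto):
                mask = top_idx == local_e
                if mask.any():
                    global_e = p * self.experts_per_proto + local_e
                    out[mask] += top_prob[mask, None] * self.experts[global_e](x[mask])
        return out


# Example: 16 experts split into k=4 prototypes, so ~4 experts fire per token,
# matching the compute of top-4 routing while using k independent top-1 gates.
layer = ExpertPrototypeRouter(d_model=64, num_experts=16, k=4)
tokens = torch.randn(8, 64)
output = layer(tokens)  # shape (8, 64)
```

The point of the sketch is the routing structure only: the number of activated experts per token stays at $k$, so the computational cost matches top-$k$ routing, while each prototype makes its selection independently.
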
Authors (15)
  1. An Yang (32 papers)
  2. Junyang Lin (99 papers)
  3. Rui Men (21 papers)
  4. Chang Zhou (105 papers)
  5. Le Jiang (13 papers)
  6. Xianyan Jia (11 papers)
  7. Ang Wang (13 papers)
  8. Jie Zhang (846 papers)
  9. Jiamang Wang (12 papers)
  10. Yong Li (628 papers)
  11. Di Zhang (230 papers)
  12. Wei Lin (207 papers)
  13. Lin Qu (10 papers)
  14. Jingren Zhou (198 papers)
  15. Hongxia Yang (130 papers)
Citations (22)