Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models (2406.06563v1)

Published 3 Jun 2024 in cs.CL and cs.AI

Abstract: In this technical report, we introduce the training methodologies implemented in the development of Skywork-MoE, a high-performance mixture-of-experts (MoE) LLM with 146 billion parameters and 16 experts. It is initialized from the pre-existing dense checkpoints of our Skywork-13B model. We explore the comparative effectiveness of upcycling versus training from scratch initializations. Our findings suggest that the choice between these two approaches should consider both the performance of the existing dense checkpoints and the MoE training budget. We highlight two innovative techniques: gating logit normalization, which improves expert diversification, and adaptive auxiliary loss coefficients, allowing for layer-specific adjustment of auxiliary loss coefficients. Our experimental results validate the effectiveness of these methods. Leveraging these techniques and insights, we trained our upcycled Skywork-MoE on a condensed subset of our SkyPile corpus. The evaluation results demonstrate that our model delivers strong performance across a wide range of benchmarks.
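The two techniques highlighted above can be sketched concretely. The snippet below is a minimal PyTorch illustration, not the report's exact formulation: gating logit normalization is shown as standardizing the per-token gating logits and rescaling them before the softmax, and the adaptive auxiliary loss coefficient is shown as a simple per-layer multiplicative update driven by the observed token drop rate. Function names, the `scale` hyperparameter, and the update rule are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def normalized_gating(x, w_gate, scale=1.0, eps=1e-6):
    """Gating with logit normalization (illustrative sketch).

    Standardizes the gating logits per token (zero mean, unit variance)
    and rescales them before the softmax, which sharpens the gate
    distribution and encourages expert diversification.
    x:      [num_tokens, d_model] token representations
    w_gate: [d_model, num_experts] gating projection
    """
    logits = x @ w_gate                              # [num_tokens, num_experts]
    mu = logits.mean(dim=-1, keepdim=True)
    sigma = logits.std(dim=-1, keepdim=True)
    logits = scale * (logits - mu) / (sigma + eps)   # normalize, then rescale
    return F.softmax(logits, dim=-1)                 # gate probabilities

def update_aux_coeff(coeff, drop_rate, target_drop=0.01, up=1.05, down=0.95):
    """Layer-specific adjustment of the auxiliary (load-balancing) loss
    coefficient (illustrative rule, not the report's exact schedule):
    raise the coefficient when the layer drops too many tokens, lower it
    when routing is already balanced."""
    return coeff * (up if drop_rate > target_drop else down)
```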

Authors (16)
  1. Tianwen Wei (20 papers)
  2. Bo Zhu (83 papers)
  3. Liang Zhao (353 papers)
  4. Cheng Cheng (188 papers)
  5. Biye Li (6 papers)
  6. Weiwei Lü (2 papers)
  7. Peng Cheng (229 papers)
  8. Jianhao Zhang (31 papers)
  9. Xiaoyu Zhang (144 papers)
  10. Liang Zeng (31 papers)
  11. Xiaokun Wang (10 papers)
  12. Yutuan Ma (2 papers)
  13. Rui Hu (96 papers)
  14. Shuicheng Yan (275 papers)
  15. Han Fang (61 papers)
  16. Yahui Zhou (18 papers)
Citations (13)