MoE Parallel Folding: Heterogeneous Parallelism Mappings for Efficient Large-Scale MoE Model Training with Megatron Core (2504.14960v2)

Published 21 Apr 2025 in cs.LG and cs.DC

Abstract: Mixture of Experts (MoE) models enhance neural network scalability by dynamically selecting relevant experts per input token, enabling larger model sizes while maintaining manageable computation costs. However, efficient training of large-scale MoE models across thousands of GPUs presents significant challenges due to limitations in existing parallelism strategies. We introduce an end-to-end training framework for large-scale MoE models that utilizes five-dimensional hybrid parallelism: Tensor Parallelism, Expert Parallelism, Context Parallelism, Data Parallelism, and Pipeline Parallelism. Central to our approach is MoE Parallel Folding, a novel strategy that decouples the parallelization of attention and MoE layers in Transformer models, allowing each layer type to adopt optimal parallel configurations. Additionally, we develop a flexible token-level dispatcher that supports both token-dropping and token-dropless MoE training across all five dimensions of parallelism. This dispatcher accommodates dynamic tensor shapes and coordinates different parallelism schemes for Attention and MoE layers, facilitating complex parallelism implementations. Our experiments demonstrate significant improvements in training efficiency and scalability. We achieve up to 49.3% Model Flops Utilization (MFU) for the Mixtral 8x22B model and 39.0% MFU for the Qwen2-57B-A14B model on H100 GPUs, outperforming existing methods. The framework scales efficiently up to 1,024 GPUs and maintains high performance with sequence lengths up to 128K tokens, validating its effectiveness for large-scale MoE model training. The code is available in Megatron-Core.
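Since the abstract describes MoE Parallel Folding as decoupling the parallel mapping of attention layers from that of MoE layers, the following is a minimal sketch (plain Python/NumPy, not the Megatron-Core API) of the underlying idea: the same set of GPU ranks is factored into two independent parallel grids, one per layer type. The group names, grid sizes, and rank-ordering convention below are assumptions chosen for illustration only.

```python
import numpy as np

def parallel_groups(world_size, dims):
    """Return {dim_name: [rank groups]} for a row-major mapping of ranks
    onto a grid whose axes are `dims` (ordered outermost -> innermost).
    This is an illustrative helper, not part of Megatron-Core."""
    names, sizes = zip(*dims)
    grid = np.arange(world_size).reshape(sizes)
    groups = {}
    for axis, name in enumerate(names):
        # Move the axis of interest to the end and flatten the rest:
        # each resulting row is one communication group along that axis.
        moved = np.moveaxis(grid, axis, -1).reshape(-1, sizes[axis])
        groups[name] = [list(row) for row in moved]
    return groups

WORLD = 8  # toy world size; the paper scales to 1,024 GPUs

# Attention layers: tensor x context x data parallelism over the 8 ranks.
attn_groups = parallel_groups(WORLD, [("DP", 2), ("CP", 2), ("TP", 2)])

# MoE layers: expert-tensor x expert x data parallelism, chosen independently
# over the very same ranks ("folded" onto the attention mapping).
moe_groups = parallel_groups(WORLD, [("DP", 2), ("EP", 4), ("ETP", 1)])

print("attention groups:", attn_groups)
print("moe groups:      ", moe_groups)
```

In this toy mapping, the attention layers' CP groups fold into the wider EP groups used by the MoE layers, so each layer type gets its own optimal decomposition; coordinating token routing across two such differing layouts is what the paper's token-level dispatcher handles.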

Authors (18)
  1. Dennis Liu (2 papers)
  2. Zijie Yan (10 papers)
  3. Xin Yao (139 papers)
  4. Tong Liu (316 papers)
  5. Vijay Korthikanti (7 papers)
  6. Evan Wu (1 paper)
  7. Shiqing Fan (10 papers)
  8. Gao Deng (1 paper)
  9. Hongxiao Bai (4 papers)
  10. Ashwath Aithal (12 papers)
  11. Michael Andersch (5 papers)
  12. Mohammad Shoeybi (60 papers)
  13. Jiajie Yao (3 papers)
  14. Chandler Zhou (2 papers)
  15. David Wu (25 papers)
  16. Xipeng Li (2 papers)
  17. June Yang (3 papers)
  18. Jianbin Chang (2 papers)