Pangu Ultra MoE: How to Train Your Big MoE on Ascend NPUs (2505.04519v1)

Published 7 May 2025 in cs.CL

Abstract: Sparse LLMs with Mixture of Experts (MoE) and close to a trillion parameters are dominating the realm of most capable LLMs. However, the massive model scale poses significant challenges for the underlying software and hardware systems. In this paper, we aim to uncover a recipe to harness such scale on Ascend NPUs. The key goals are better usage of the computing resources under the dynamic sparse model structures and materializing the expected performance gain on the actual hardware. To select model configurations suitable for Ascend NPUs without repeatedly running the expensive experiments, we leverage simulation to compare the trade-off of various model hyperparameters. This study led to Pangu Ultra MoE, a sparse LLM with 718 billion parameters, and we conducted experiments on the model to verify the simulation results. On the system side, we dig into Expert Parallelism to optimize the communication between NPU devices to reduce the synchronization overhead. We also optimize the memory efficiency within the devices to further reduce the parameter and activation management overhead. In the end, we achieve an MFU of 30.0% when training Pangu Ultra MoE, with performance comparable to that of DeepSeek R1, on 6K Ascend NPUs, and demonstrate that the Ascend system is capable of harnessing all the training stages of the state-of-the-art LLMs. Extensive experiments indicate that our recipe can lead to efficient training of large-scale sparse LLMs with MoE. We also study the behaviors of such models for future reference.

Authors (74)
  1. Yehui Tang (63 papers)
  2. Yichun Yin (27 papers)
  3. Yaoyuan Wang (18 papers)
  4. Hang Zhou (166 papers)
  5. Yu Pan (154 papers)
  6. Wei Guo (221 papers)
  7. Ziyang Zhang (69 papers)
  8. Miao Rang (3 papers)
  9. Fangcheng Liu (7 papers)
  10. Naifu Zhang (8 papers)
  11. Binghan Li (5 papers)
  12. Yonghan Dong (5 papers)
  13. Xiaojun Meng (23 papers)
  14. Yasheng Wang (91 papers)
  15. Dong Li (429 papers)
  16. Yin Li (150 papers)
  17. Dandan Tu (25 papers)
  18. Can Chen (64 papers)
  19. Youliang Yan (31 papers)
  20. Fisher Yu (104 papers)

Summary

Pangu Ultra MoE: How to Train Large Sparse Mixture of Experts on Ascend NPUs

The paper "Pangu Ultra MoE: How to Train Your Big MoE on Ascend NPUs" by the Huawei Pangu Team presents a comprehensive methodology for training large-scale sparse LLMs, specifically a Mixture of Experts (MoE) model on Ascend Neural Processing Units (NPUs). With increasing model sizes approaching a trillion parameters, practical challenges arise in optimizing software and hardware to fully utilize these scale networks. This paper explores strategies to harness such model scales efficiently on Ascend NPUs, aiming for optimal utilization of computing resources and actualizing the theoretical performance gain.

Key Objectives and Achievements

The authors address these challenges by carefully co-designing the model architecture and the system configuration. The primary objectives and achievements outlined in the paper are:

  • Model Configuration and Simulation: The paper introduces a simulation-driven method for selecting model configurations that suit the capabilities of Ascend NPUs. This approach allows quick exploration of the trade-offs among model hyperparameters without the cost of repeatedly running expensive full-scale experiments. The resulting configuration, Pangu Ultra MoE, encompasses 718 billion parameters (a toy cost-model sketch follows this list).
  • Expert Parallelism and Communication Optimization: The research leverages Expert Parallelism to reduce synchronization overhead when training across roughly 6,000 NPUs. By adopting hierarchical communication strategies, the system separates intra-node from inter-node traffic and makes better use of the available bandwidth (see the dispatch sketch after this list).
  • Memory Efficiency Optimizations: Refined memory-management strategies mitigate the high activation memory and parameter-management overhead typical of large MoE models. Techniques such as fine-grained recomputation and tensor swapping keep memory usage within device limits during training (a recomputation sketch follows the list).
  • Load Balancing and Token Behavior Analysis: Load balancing is improved through adaptive strategies that predict expert load and place tokens accordingly, avoiding severe bottlenecks (a routing sketch follows the list). In addition, a detailed analysis of model behavior during training offers insights that can inform future MoE implementations.
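
The simulation tooling itself is not described in this summary, but its underlying idea, comparing candidate configurations with an analytical cost model instead of full training runs, can be illustrated with a toy sketch. Everything below (the configurations, the FLOP formula restricted to expert FFNs, the omission of attention, communication, and memory effects) is a simplifying assumption, not the paper's actual simulator:

```python
# Toy analytical cost model for comparing hypothetical MoE configurations.
# The formula only counts expert-FFN FLOPs per token (forward pass).

def moe_ffn_flops_per_token(hidden: int, ffn_mult: float, top_k: int) -> float:
    """Approximate FFN FLOPs per token: 2 * (params touched by the top_k experts)."""
    expert_params = 2 * hidden * int(hidden * ffn_mult)  # up + down projection
    return 2.0 * expert_params * top_k                   # only top_k experts run

configs = [
    # (name, hidden, ffn_mult, num_experts, top_k) -- all illustrative
    ("wide-few-experts",    8192, 4.0,  64, 2),
    ("narrow-many-experts", 6144, 2.0, 256, 8),
]

for name, hidden, ffn_mult, n_exp, top_k in configs:
    total = n_exp * 2 * hidden * int(hidden * ffn_mult)   # parameters held in memory
    active = moe_ffn_flops_per_token(hidden, ffn_mult, top_k)
    print(f"{name:>22}: total expert params {total / 1e9:6.1f}B, "
          f"FFN GFLOPs/token {active / 1e9:6.2f}")
```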
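
Likewise, the hierarchical communication scheme relies on Ascend's collective stack, but its core idea, aggregating traffic destined for the same node before it crosses the slower inter-node links, can be sketched in plain Python. The node size and routing table below are hypothetical placeholders; a real implementation would sit on top of grouped all-to-all collectives:

```python
# Sketch of hierarchical (two-level) token dispatch for expert parallelism.
# Stage 1 groups tokens by destination *node* so each inter-node link carries
# one aggregated message; stage 2 fans tokens out to devices inside the node.
from collections import defaultdict

DEVICES_PER_NODE = 8  # assumption: 8 NPUs per node

def hierarchical_dispatch(token_to_device):
    """token_to_device: list mapping token index -> destination device id."""
    # Stage 1: inter-node aggregation (slow links, fewer but larger messages).
    per_node = defaultdict(list)
    for tok, dev in enumerate(token_to_device):
        per_node[dev // DEVICES_PER_NODE].append((tok, dev))

    # Stage 2: intra-node scatter (fast links inside the node).
    per_device = defaultdict(list)
    for node, items in per_node.items():
        for tok, dev in items:
            per_device[dev].append(tok)
    return per_node, per_device

routing = [3, 17, 3, 9, 25, 17]  # hypothetical per-token device assignments
nodes, devices = hierarchical_dispatch(routing)
print("inter-node messages:", {n: len(v) for n, v in nodes.items()})
print("intra-node fan-out :", dict(devices))
```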
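
For the memory optimizations, the Ascend-specific details are not given here, but fine-grained recomputation follows the same principle as activation checkpointing in mainstream frameworks: drop selected activations in the forward pass and recompute them during backward. The following is a generic PyTorch illustration, not the paper's implementation, and the choice of which block to checkpoint is arbitrary:

```python
# Generic activation-recomputation sketch (PyTorch checkpointing), trading
# extra forward compute for lower activation memory.
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

class FFNBlock(nn.Module):
    def __init__(self, hidden: int = 1024, mult: int = 4):
        super().__init__()
        self.up = nn.Linear(hidden, hidden * mult)
        self.down = nn.Linear(hidden * mult, hidden)

    def forward(self, x):
        return self.down(torch.relu(self.up(x)))

block = FFNBlock()
x = torch.randn(32, 1024, requires_grad=True)

# Fine-grained choice: only this block's intermediate activations are dropped
# and recomputed during backward; the rest of the model keeps its activations.
y = checkpoint(block, x, use_reentrant=False)
y.sum().backward()
print("grad shape:", x.grad.shape)
```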
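
Finally, the adaptive device-level load-balancing strategy is paper-specific, but the standard mechanism it builds on, top-k routing combined with an auxiliary loss that penalizes uneven expert load, can be sketched directly. The loss below follows the common Switch-Transformer-style formulation and is an assumption about the general technique, not the paper's exact objective:

```python
# Top-k MoE routing with a standard load-balancing auxiliary loss (sketch).
import torch
import torch.nn.functional as F

def route(tokens, router_weight, top_k=2):
    """tokens: [T, H], router_weight: [H, E]. Returns expert ids, gates, aux loss."""
    logits = tokens @ router_weight                  # [T, E]
    probs = F.softmax(logits, dim=-1)
    top_p, top_idx = probs.topk(top_k, dim=-1)       # chosen experts per token

    num_experts = router_weight.shape[1]
    # Fraction of tokens dispatched to each expert (hard counts)...
    dispatch = F.one_hot(top_idx, num_experts).float().sum(dim=(0, 1))
    frac_tokens = dispatch / dispatch.sum()
    # ...and mean router probability per expert (soft load).
    frac_probs = probs.mean(dim=0)
    # The auxiliary loss is minimized when both distributions are uniform.
    aux_loss = num_experts * torch.sum(frac_tokens * frac_probs)
    return top_idx, top_p, aux_loss

tokens = torch.randn(16, 64)
router_w = torch.randn(64, 8)
idx, gates, aux = route(tokens, router_w)
print("expert ids per token:", idx.shape, "aux loss:", float(aux))
```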

Numerical Results and Comparative Evaluation

The paper reports substantial system-level gains, including a Model FLOPs Utilization (MFU) of 30.0%, notably higher than the baseline, and a cluster-wide throughput of 1.46 million tokens per second (TPS), a marked improvement over previously reported baselines. The work also shows that Pangu Ultra MoE performs comparably to state-of-the-art models such as DeepSeek R1, with particularly strong results on medical benchmarks, demonstrating solid domain-specific capability.
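
For context, MFU relates the FLOPs a training run actually performs to the aggregate hardware peak. The sketch below works through that definition with the throughput and cluster size quoted above, but the activated-parameter count and per-device peak are placeholder assumptions, and the simple 6-FLOPs-per-parameter approximation ignores attention, so it will not reproduce the paper's exact 30.0% figure:

```python
# Worked example of the MFU definition with placeholder constants.

tokens_per_sec = 1.46e6          # cluster-wide throughput (from the summary)
num_devices = 6000               # training cluster size (from the summary)
active_params = 39e9             # ASSUMED activated parameters per token
peak_flops_per_device = 280e12   # ASSUMED per-device peak throughput

# Common approximation: ~6 FLOPs per active parameter per token (fwd + bwd),
# which omits attention FLOPs and other terms a full accounting would include.
model_flops_per_sec = 6 * active_params * tokens_per_sec
mfu = model_flops_per_sec / (num_devices * peak_flops_per_device)
print(f"MFU estimate under these assumptions: {mfu:.1%}")
```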

Implications and Future Work

The implications of this research are manifold. Practically, the proposed methodology supports scalable and efficient training of very large MoE models on NPUs, bridging the gap between theoretical potential and practical implementation. Theoretically, the analyses underscore the importance of expert specialization and load balancing when training sparse models, fostering a more nuanced understanding of MoE architectures. As models continue to grow, the paper offers foundational guidance for training even larger, multi-trillion-parameter models with greater efficiency and better resource utilization.

Conclusion

In summary, the paper presents an effective approach to training large-scale sparse models on Ascend hardware, covering both architecture selection and system-level optimization. Through simulation-guided configuration and a series of targeted system optimizations, the work provides a framework that should be adaptable to other large-scale training efforts, pointing the way toward hardware-aware training of future, even larger models.
