Pangu Ultra: Pushing the Limits of Dense Large Language Models on Ascend NPUs (2504.07866v2)
Abstract: We present Pangu Ultra, an LLM with 135 billion parameters and dense Transformer modules trained on Ascend Neural Processing Units (NPUs). Although the field of LLMs has seen unprecedented advances in scale and capability in recent years, training a model of this size still involves significant optimization and system challenges. To stabilize training, we propose depth-scaled sandwich normalization, which effectively eliminates loss spikes when training deep models. We pre-train our model on 13.2 trillion diverse, high-quality tokens and further enhance its reasoning capabilities during post-training. To perform such large-scale training efficiently, we utilize 8,192 Ascend NPUs together with a series of system optimizations. Evaluations on multiple diverse benchmarks indicate that Pangu Ultra significantly advances the state-of-the-art capabilities of dense LLMs such as Llama 405B and Mistral Large 2, and even achieves results competitive with DeepSeek-R1, whose sparse model structure contains many more parameters. Our exploration demonstrates that Ascend NPUs are capable of efficiently and effectively training dense models with more than 100 billion parameters. Our model and system will be available to our commercial customers.
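The abstract attributes training stability to depth-scaled sandwich normalization. The snippet below is a minimal sketch of the general idea only: each Transformer sublayer is wrapped by a pre-norm and a post-norm, and the post-norm gain is initialized with a depth-dependent scale. The class name, the choice of RMSNorm, and the 1/sqrt(num_layers) initialization are illustrative assumptions; the paper defines the exact scaling rule.

```python
import torch
import torch.nn as nn


class SandwichBlock(nn.Module):
    """Illustrative sublayer wrapper with sandwich normalization.

    The sublayer (attention or MLP) is normalized on both its input and
    its output before the residual addition. The post-norm gain is
    initialized with a depth-dependent scale; the 1/sqrt(num_layers)
    rule below is a placeholder, not the paper's exact schedule.
    """

    def __init__(self, d_model: int, sublayer: nn.Module, num_layers: int):
        super().__init__()
        self.pre_norm = nn.RMSNorm(d_model)   # norm type assumed for this sketch
        self.post_norm = nn.RMSNorm(d_model)
        self.sublayer = sublayer
        with torch.no_grad():
            # Shrink the post-norm gain as the model gets deeper so that
            # residual-branch updates stay bounded early in training.
            self.post_norm.weight.fill_(1.0 / (num_layers ** 0.5))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sandwich norm: normalize before and after the sublayer,
        # then add the residual connection.
        return x + self.post_norm(self.sublayer(self.pre_norm(x)))


# Toy usage: a linear layer stands in for attention or the MLP.
block = SandwichBlock(d_model=512, sublayer=nn.Linear(512, 512), num_layers=64)
out = block(torch.randn(2, 16, 512))
```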
- Yichun Yin
- Wenyong Huang
- Kaikai Song
- Yehui Tang
- Xueyu Wu
- Wei Guo
- Peng Guo
- Yaoyuan Wang
- Xiaojun Meng
- Yasheng Wang
- Dong Li
- Can Chen
- Dandan Tu
- Yin Li
- Fisher Yu
- Ruiming Tang
- Yunhe Wang
- Baojun Wang
- Bin Wang
- Bo Wang