PanGu-Σ: Towards Trillion Parameter Language Model with Sparse Heterogeneous Computing (2303.10845v1)

Published 20 Mar 2023 in cs.CL

Abstract: The scaling of LLMs has greatly improved natural language understanding, generation, and reasoning. In this work, we develop a system that trains a trillion-parameter LLM on a cluster of Ascend 910 AI processors with the MindSpore framework, and present the resulting 1.085T-parameter LLM, named PanGu-Σ. With parameters inherited from PanGu-α, we extend the dense Transformer model to a sparse one with Random Routed Experts (RRE), and efficiently train the model over 329B tokens using Expert Computation and Storage Separation (ECSS). This results in a 6.3x increase in training throughput through heterogeneous computing. Our experimental findings show that PanGu-Σ provides state-of-the-art performance in zero-shot learning on various Chinese NLP downstream tasks. Moreover, it demonstrates strong abilities when fine-tuned on application data for open-domain dialogue, question answering, machine translation, and code generation.

Authors (17)
  1. Xiaozhe Ren (21 papers)
  2. Pingyi Zhou (9 papers)
  3. Xinfan Meng (1 paper)
  4. Xinjing Huang (4 papers)
  5. Yadao Wang (7 papers)
  6. Weichao Wang (15 papers)
  7. Pengfei Li (185 papers)
  8. Xiaoda Zhang (3 papers)
  9. Alexander Podolskiy (8 papers)
  10. Grigory Arshinov (1 paper)
  11. Andrey Bout (8 papers)
  12. Irina Piontkovskaya (24 papers)
  13. Jiansheng Wei (10 papers)
  14. Xin Jiang (242 papers)
  15. Teng Su (5 papers)
  16. Qun Liu (230 papers)
  17. Jun Yao (36 papers)
Citations (51)

Summary

Overview of PanGu-Σ: Towards Trillion Parameter LLM with Sparse Heterogeneous Computing

The paper "PanGu-Σ: Towards Trillion Parameter LLM with Sparse Heterogeneous Computing" introduces PanGu-Σ, a trillion-parameter LLM that leverages sparse heterogeneous computing techniques. The work builds upon the PanGu-α model, expanding its dense Transformer architecture to incorporate Random Routed Experts (RRE) for greater computational efficiency. By training over 329 billion tokens, the researchers achieved a 6.3-fold increase in training throughput, facilitated by Expert Computation and Storage Separation (ECSS).

Model Architecture

PanGu-Σ adopts a sparse model architecture in which RRE dynamically engages subsets of the model's parameters during training. This applies a mixture-of-experts design to reduce computational load and make better use of hardware resources. Combined with heterogeneous computing, the architecture scales the training process and delivers considerable throughput gains without compromising performance.
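To illustrate the routing idea, the sketch below shows a minimal, non-learned expert assignment in the spirit of RRE: each token is mapped to an expert by a fixed pseudo-random hash of its token ID rather than by a trained gating network. The shapes, the hash constant, and the tiny feed-forward experts are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of random (non-learned) expert routing in the spirit of RRE.
# All shapes, the hash-based assignment, and the small feed-forward experts
# are illustrative assumptions, not the paper's exact implementation.
import numpy as np

rng = np.random.default_rng(0)

d_model, d_ff, n_experts = 64, 256, 4

# Each expert is an independent two-layer feed-forward block.
experts = [
    {"w1": rng.standard_normal((d_model, d_ff)) * 0.02,
     "w2": rng.standard_normal((d_ff, d_model)) * 0.02}
    for _ in range(n_experts)
]

def route(token_ids: np.ndarray, num_experts: int) -> np.ndarray:
    """Assign each token to an expert with a fixed pseudo-random hash.

    Unlike a learned gating network, the mapping is static, so no routing
    parameters need to be trained or synchronized across devices.
    """
    return (token_ids * 2654435761 % 2**32) % num_experts

def rre_layer(hidden: np.ndarray, token_ids: np.ndarray) -> np.ndarray:
    """Apply the expert selected for each token to that token's hidden state."""
    assignment = route(token_ids, n_experts)
    out = np.empty_like(hidden)
    for e, params in enumerate(experts):
        mask = assignment == e
        if mask.any():
            h = hidden[mask]
            out[mask] = np.maximum(h @ params["w1"], 0.0) @ params["w2"]
    return out

# Example: a batch of 8 tokens, each with a d_model-dimensional hidden vector.
tokens = rng.integers(0, 50_000, size=8)
hidden = rng.standard_normal((8, d_model))
print(rre_layer(hidden, tokens).shape)  # (8, 64)
```

Because the routing is static, there are no gating parameters to learn or synchronize across devices, which is part of what makes this style of routing attractive for very large sparse models.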

Dataset and Training Process

The dataset used to train PanGu-Σ comprises 329 billion tokens, selected to cover a wide range of linguistic constructs necessary for robust language generation. The training process relies on ECSS to separate expert computation from expert storage, mitigating the memory and processing demands typically associated with massive LLMs (see the code sketch after the Performance and Results paragraph).

Performance and Results

Empirical evaluations show that PanGu-Σ achieves state-of-the-art zero-shot performance across various Chinese NLP tasks, reflecting strong natural language understanding and generation. The model also demonstrates strong capabilities when fine-tuned for applications such as open-domain dialogue, question answering, machine translation, and code generation.
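Returning to the training process, the following sketch illustrates the separation of expert computation from expert storage: all expert parameters live in a host-side store, and only the experts activated by the current batch are staged for computation. The host-store dictionary and the numpy stand-in for device memory are assumptions for illustration; the actual system is built on MindSpore and Ascend hardware.

```python
# Minimal sketch of the expert computation / storage separation idea:
# expert weights stay in a host-side store, and only the experts activated
# by the current batch are copied into a short-lived "device cache" for the
# forward computation. The data structures here are illustrative assumptions,
# not MindSpore APIs.
import numpy as np

rng = np.random.default_rng(1)
d_model, d_ff, n_experts = 64, 256, 32

# Host-side store: all expert parameters live here (cheap, abundant memory).
host_store = {
    e: {"w1": rng.standard_normal((d_model, d_ff)) * 0.02,
        "w2": rng.standard_normal((d_ff, d_model)) * 0.02}
    for e in range(n_experts)
}

def run_batch(hidden: np.ndarray, assignment: np.ndarray) -> np.ndarray:
    """Compute an expert layer while only materializing the active experts."""
    active = np.unique(assignment)
    # Stage only the activated experts onto the device (here: a plain copy).
    device_cache = {int(e): {k: v.copy() for k, v in host_store[e].items()}
                    for e in active}
    out = np.empty_like(hidden)
    for e, params in device_cache.items():
        mask = assignment == e
        out[mask] = np.maximum(hidden[mask] @ params["w1"], 0.0) @ params["w2"]
    # After the step, updated parameters would be written back to the host
    # store and the device cache released, freeing accelerator memory.
    return out

hidden = rng.standard_normal((16, d_model))
assignment = rng.integers(0, n_experts, size=16)
print(run_batch(hidden, assignment).shape)  # (16, 64)
```

In a setup like this, expert optimizer states could likewise remain host-side and be updated there, so accelerator memory is spent only on the dense layers and the handful of experts active in the current step.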

Implications and Future Directions

The advances represented by PanGu-Σ carry several implications for AI research and practice. Theoretically, the model's use of sparsity and heterogeneous computing could inform future work on scaling AI systems efficiently. Practically, its proficiency across diverse tasks suggests deployments in areas where language understanding and generation are critical, such as customer support, automated translation services, and software development.

Future work might explore further optimizations of sparsity strategies, possibly extending to multilingual contexts or more domain-specific tasks. Refining sparse heterogeneous computing techniques within distributed training environments could yield even greater efficiencies, paving the way for more accessible large-scale model training across different computational infrastructures.

In summary, PanGu-Σ contributes significantly to the landscape of trillion-parameter models, demonstrating effective scale-up strategies via sparse heterogeneous computation, with promising impacts on both theoretical exploration and practical deployment of LLMs.
