
Maximizing Parallelism in Distributed Training for Huge Neural Networks (2105.14450v1)

Published 30 May 2021 in cs.DC, cs.LG, and cs.PF

Abstract: Recent Natural Language Processing techniques have been refreshing state-of-the-art performance at an incredible pace, making the training of huge LLMs an imperative demand in both industry and academia. However, huge LLMs impose challenges on both hardware and software. Graphics processing units (GPUs) are iterated frequently to meet the exploding demand, and a variety of ASICs such as TPUs have emerged. Still, there is a tension between the rapid growth of extremely large models and the fact that Moore's law is approaching its end. To this end, many model-parallelism techniques have been proposed to distribute model parameters across multiple devices, alleviating the pressure on both memory and computation. Our work is the first to introduce 3-dimensional model parallelism for expediting huge LLMs. By achieving a perfect load balance, our approach incurs lower memory and communication costs than existing state-of-the-art 1-D and 2-D model parallelism. Our experiments on 64 of TACC's V100 GPUs show that our 3-D parallelism outperforms the 1-D and 2-D parallelism with 2.32x and 1.57x speedups, respectively.
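The 3-D model parallelism the abstract refers to builds on the classic 3-D parallel matrix-multiplication decomposition, in which devices form a logical p x p x p cube and each device holds only one block of each operand. The following is a minimal serial sketch of that decomposition (not the authors' implementation): device (i, j, k) holds block (i, k) of A and block (k, j) of B, computes one partial product, and the partials along the k axis are summed, which in a real system would be a reduction across devices.

```python
import numpy as np

def matmul_3d_parallel(A, B, p):
    """Serially simulate 3-D parallel matrix multiplication on a
    p x p x p logical device mesh (illustrative sketch only)."""
    n = A.shape[0]
    assert A.shape == B.shape == (n, n) and n % p == 0
    b = n // p  # side length of the block held by each device
    C = np.zeros((n, n))
    # Device (i, j, k) holds A-block (i, k) and B-block (k, j) and
    # computes one partial product of C-block (i, j); accumulating
    # over k plays the role of the reduce along the mesh's third axis.
    for i in range(p):
        for j in range(p):
            for k in range(p):
                C[i*b:(i+1)*b, j*b:(j+1)*b] += (
                    A[i*b:(i+1)*b, k*b:(k+1)*b]
                    @ B[k*b:(k+1)*b, j*b:(j+1)*b]
                )
    return C
```

Because each of the p^3 devices stores only an (n/p) x (n/p) block of each operand, per-device memory shrinks as 1/p^2, which is the load-balance property the abstract contrasts with 1-D and 2-D schemes.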

Authors (4)
  1. Zhengda Bian (5 papers)
  2. Qifan Xu (6 papers)
  3. Boxiang Wang (17 papers)
  4. Yang You (173 papers)
Citations (40)