BaPipe: Exploration of Balanced Pipeline Parallelism for DNN Training (2012.12544v2)

Published 23 Dec 2020 in cs.DC and cs.AI

Abstract: The size of deep neural networks (DNNs) grows rapidly as the complexity of machine learning algorithms increases. To meet the computation and memory requirements of DNN training, distributed deep learning based on model parallelism has been widely adopted. We propose BaPipe, a new pipeline parallelism training framework that automatically explores pipeline parallelism training methods and balanced partition strategies for distributed DNN training. In BaPipe, each accelerator computes the forward and backward propagation of a different part of the network, implementing an intra-batch pipeline parallelism strategy. BaPipe uses a new automatic load-balancing exploration strategy that considers the parameters of the DNN model as well as the computation, memory, and communication resources of the accelerator cluster. We have trained DNNs such as VGG-16, ResNet-50, and GNMT on GPU clusters and simulated the performance of different FPGA clusters. Compared with state-of-the-art data parallelism and pipeline parallelism frameworks, BaPipe provides up to 3.2x speedup and 4x memory reduction on various platforms.
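
The abstract does not spell out BaPipe's exploration algorithm, but the core idea of a balanced pipeline partition can be illustrated with a small sketch. The Python code below (hypothetical names, not from the paper) brute-forces the layer cut points that minimize the bottleneck stage cost given per-layer compute times; BaPipe's actual strategy additionally weighs memory and communication resources of the cluster.

```python
from itertools import combinations

def stage_costs(layer_costs, cut_points):
    """Split layer_costs at the given cut indices and return per-stage sums."""
    bounds = [0] + list(cut_points) + [len(layer_costs)]
    return [sum(layer_costs[a:b]) for a, b in zip(bounds, bounds[1:])]

def balanced_partition(layer_costs, num_stages):
    """Brute-force the cut points minimizing the bottleneck (max) stage cost.

    In a synchronous pipeline, throughput is limited by the slowest stage,
    so a balanced partition minimizes that maximum.
    """
    best_cuts, best_bottleneck = None, float("inf")
    for cuts in combinations(range(1, len(layer_costs)), num_stages - 1):
        bottleneck = max(stage_costs(layer_costs, cuts))
        if bottleneck < best_bottleneck:
            best_cuts, best_bottleneck = cuts, bottleneck
    return best_cuts, best_bottleneck

if __name__ == "__main__":
    # Hypothetical per-layer forward+backward times (ms) for a small network.
    layer_costs = [4.0, 9.0, 7.0, 3.0, 6.0, 2.0, 5.0]
    cuts, bottleneck = balanced_partition(layer_costs, num_stages=3)
    print("cut before layers:", cuts)                      # (2, 4)
    print("per-stage costs:", stage_costs(layer_costs, cuts))
    print("bottleneck stage cost:", bottleneck)
```

Brute-force enumeration is only feasible for short networks; a practical framework would use dynamic programming or a profiling-guided search, and would score each candidate partition against memory capacity and inter-stage communication cost as well as compute time.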

Authors (8)
  1. Letian Zhao (3 papers)
  2. Rui Xu (199 papers)
  3. Tianqi Wang (43 papers)
  4. Teng Tian (1 paper)
  5. Xiaotian Wang (38 papers)
  6. Wei Wu (482 papers)
  7. Xi Jin (6 papers)
  8. Chio-In Ieong (3 papers)
Citations (7)