Optimizing Multi-GPU Parallelization Strategies for Deep Learning Training (1907.13257v1)

Published 30 Jul 2019 in cs.LG, cs.AI, cs.DC, and stat.ML

Abstract: Deploying deep learning (DL) models across multiple compute devices to train large and complex models continues to grow in importance because of the demand for faster and more frequent training. Data parallelism (DP) is the most widely used parallelization strategy, but as the number of devices in data parallel training grows, so does the communication overhead between devices. Additionally, a larger aggregate batch size per step leads to statistical efficiency loss, i.e., a larger number of epochs are required to converge to a desired accuracy. These factors affect overall training time and beyond a certain number of devices, the speedup from leveraging DP begins to scale poorly. In addition to DP, each training step can be accelerated by exploiting model parallelism (MP). This work explores hybrid parallelization, where each data parallel worker is comprised of more than one device, across which the model dataflow graph (DFG) is split using MP. We show that at scale, hybrid training will be more effective at minimizing end-to-end training time than exploiting DP alone. We project that for Inception-V3, GNMT, and BigLSTM, the hybrid strategy provides an end-to-end training speedup of at least 26.5%, 8%, and 22% respectively compared to what DP alone can achieve at scale.

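To make the hybrid scheme concrete, below is a minimal sketch (not the authors' implementation) of how a single data-parallel worker spanning two GPUs can be expressed in PyTorch: the model's layers are split across the worker's two devices (model parallelism), and gradient all-reduce across workers supplies the data-parallel dimension. The device mapping, layer sizes, and process-group setup are illustrative assumptions.

```python
# Hybrid DP+MP sketch: each data-parallel worker owns two GPUs and splits
# the model across them. Assumes the distributed process group has already
# been initialized (e.g. launched via torchrun with the NCCL backend).
import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


class TwoGPUWorker(nn.Module):
    """Model split across the two devices owned by one data-parallel worker."""

    def __init__(self, dev0, dev1):
        super().__init__()
        self.dev0, self.dev1 = dev0, dev1
        # First half of the layers lives on dev0, second half on dev1
        # (hypothetical layer sizes, for illustration only).
        self.part0 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to(dev0)
        self.part1 = nn.Linear(4096, 1000).to(dev1)

    def forward(self, x):
        x = self.part0(x.to(self.dev0))
        # Activation transfer between the worker's two GPUs (MP communication).
        return self.part1(x.to(self.dev1))


def build_worker(rank):
    # Hypothetical mapping: worker `rank` owns GPUs 2*rank and 2*rank + 1.
    dev0, dev1 = f"cuda:{2 * rank}", f"cuda:{2 * rank + 1}"
    model = TwoGPUWorker(dev0, dev1)
    # For a multi-device module, DDP is constructed with device_ids=None;
    # its gradient all-reduce across workers is the data-parallel dimension.
    return DDP(model, device_ids=None)
```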
Authors (8)
  1. Saptadeep Pal (3 papers)
  2. Eiman Ebrahimi (5 papers)
  3. Arslan Zulfiqar (3 papers)
  4. Yaosheng Fu (4 papers)
  5. Victor Zhang (4 papers)
  6. Szymon Migacz (8 papers)
  7. David Nellans (4 papers)
  8. Puneet Gupta (20 papers)
Citations (49)
