
DaSGD: Squeezing SGD Parallelization Performance in Distributed Training Using Delayed Averaging (2006.00441v1)

Published 31 May 2020 in cs.DC and cs.LG

Abstract: State-of-the-art deep learning algorithms rely on distributed training systems to tackle the increasing sizes of models and training datasets. The minibatch stochastic gradient descent (SGD) algorithm requires workers to halt forward/backward propagation, wait for gradients to be aggregated from all workers, and receive weight updates before starting the next batch of tasks. This synchronous execution model exposes the overhead of gradient/weight communication among a large number of workers in a distributed training system. We propose a new SGD algorithm, DaSGD (Local SGD with Delayed Averaging), which parallelizes SGD updates with forward/backward propagation to hide 100% of the communication overhead. By adjusting the gradient update scheme, the algorithm uses hardware resources more efficiently and reduces reliance on low-latency, high-throughput interconnects. Theoretical analysis and experimental results show a convergence rate of O(1/sqrt(K)), the same as SGD. The performance evaluation demonstrates that it enables linear performance scaling with cluster size.
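
The sketch below is not the authors' implementation; it only illustrates the delayed-averaging idea from the abstract in plain NumPy. Simulated workers keep taking local SGD steps while the global average launched at step t is folded back in only at step t + DELAY, standing in for an asynchronous all-reduce that overlaps with compute. The toy quadratic objective, the worker count K, the DELAY constant, and the "stale average plus local progress" correction rule are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of SGD with delayed averaging (assumptions noted above;
# not the paper's exact DaSGD update rule). Each "worker" minimizes a simple
# quadratic loss f_k(w) = 0.5 * ||w - target_k||^2; the consensus point is the
# mean of the targets.
import numpy as np

rng = np.random.default_rng(0)

K = 4        # number of simulated workers (assumption)
DELAY = 2    # steps by which the averaged model arrives late (assumption)
LR = 0.1
STEPS = 50

targets = rng.normal(size=(K, 3))   # each worker's private optimum
weights = np.zeros((K, 3))          # each worker's local copy of the model

# Queue of "in-flight" averages: (step at which the result becomes available,
# per-worker snapshot it was computed from, global average of that snapshot).
pending = []

for t in range(STEPS):
    # Local SGD step on every worker; nothing blocks on communication.
    grads = weights - targets
    weights -= LR * grads

    # "Launch" an all-reduce of the current models; its result is usable only
    # DELAY steps later, mimicking communication overlapped with compute.
    snapshot = weights.copy()
    pending.append((t + DELAY, snapshot, snapshot.mean(axis=0)))

    # Incorporate any average whose communication has "completed": replace the
    # stale snapshot with the global average while keeping the local updates
    # made during the delay window (w_k <- avg + (w_k - snapshot_k)).
    while pending and pending[0][0] <= t:
        _, snap, avg = pending.pop(0)
        weights += avg - snap

print("final averaged model:   ", weights.mean(axis=0))
print("optimum of averaged loss:", targets.mean(axis=0))
```

Run as an ordinary script; after 50 steps the averaged model should sit close to the mean of the targets, showing that applying the average late does not prevent convergence on this toy problem.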

Authors (7)
  1. Qinggang Zhou (2 papers)
  2. Yawen Zhang (23 papers)
  3. Pengcheng Li (60 papers)
  4. Xiaoyong Liu (6 papers)
  5. Jun Yang (357 papers)
  6. Runsheng Wang (49 papers)
  7. Ru Huang (52 papers)
Citations (2)
