Revisiting Distributed Synchronous SGD (1702.05800v2)

Published 19 Feb 2017 in cs.DC, cs.AI, and cs.LG

Abstract: Distributed training of deep learning models on large-scale training data is typically conducted with asynchronous stochastic optimization to maximize the rate of updates, at the cost of additional noise introduced from asynchrony. In contrast, the synchronous approach is often thought to be impractical due to idle time wasted on waiting for straggling workers. We revisit these conventional beliefs in this paper, and examine the weaknesses of both approaches. We demonstrate that a third approach, synchronous optimization with backup workers, can avoid asynchronous noise while mitigating for the worst stragglers. Our approach is empirically validated and shown to converge faster and to better test accuracies.
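
The third approach described in the abstract has the parameter server wait, at each step, for gradients from only the fastest workers and drop the contributions of the remaining "backup" workers. Below is a minimal, self-contained sketch of that idea on a toy least-squares problem, assuming simulated worker latencies; names such as `sync_sgd_with_backups` and `num_backup` are illustrative choices and not taken from the paper or its TensorFlow implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def worker_gradient(w, X, y):
    """Gradient of the mini-batch least-squares loss 0.5 * ||Xw - y||^2 / n."""
    return X.T @ (X @ w - y) / len(y)

def sync_sgd_with_backups(X, y, num_workers=8, num_backup=2, lr=0.1, steps=100):
    """Synchronous SGD that, per step, aggregates only the fastest
    (num_workers - num_backup) gradients and discards the stragglers."""
    d = X.shape[1]
    w = np.zeros(d)
    needed = num_workers - num_backup  # gradients required before updating
    for _ in range(steps):
        # Each worker draws a mini-batch and computes a gradient.
        grads, latencies = [], []
        for _ in range(num_workers):
            idx = rng.choice(len(y), size=32, replace=False)
            grads.append(worker_gradient(w, X[idx], y[idx]))
            latencies.append(rng.exponential(1.0))  # simulated compute/communication time
        # Keep only the gradients from the `needed` fastest workers.
        fastest = np.argsort(latencies)[:needed]
        g = np.mean([grads[i] for i in fastest], axis=0)
        w -= lr * g
    return w

# Toy regression problem to exercise the sketch.
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=1000)
w_hat = sync_sgd_with_backups(X, y)
print("recovered weights close to truth:", np.allclose(w_hat, w_true, atol=0.1))
```

Because every aggregated gradient comes from the current parameters, the update avoids the staleness noise of asynchronous SGD, while dropping the slowest `num_backup` workers keeps a single straggler from stalling the step.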

Authors (5)
  1. Xinghao Pan (9 papers)
  2. Jianmin Chen (25 papers)
  3. Rajat Monga (12 papers)
  4. Samy Bengio (75 papers)
  5. Rafal Jozefowicz (11 papers)
Citations (749)
