DecentLaM: Decentralized Momentum SGD for Large-batch Deep Training (2104.11981v1)

Published 24 Apr 2021 in cs.LG, cs.DC, and math.OC

Abstract: The scale of modern deep learning calls for efficient distributed training algorithms. Decentralized momentum SGD (DmSGD), in which each node averages only with its neighbors, is more communication-efficient than vanilla parallel momentum SGD, which incurs a global average across all computing nodes. On the other hand, large-batch training has been shown to be critical for achieving runtime speedup. This motivates us to investigate how DmSGD performs in the large-batch scenario. In this work, we find that the momentum term can amplify the inconsistency bias in DmSGD. This bias becomes more evident as the batch size grows and hence results in severe performance degradation. We then propose DecentLaM, a novel decentralized large-batch momentum SGD that removes the momentum-incurred bias. Convergence rates are established for both the non-convex and strongly convex scenarios. Our theoretical results justify the superiority of DecentLaM over DmSGD, especially in the large-batch scenario. Experimental results on a variety of computer vision tasks and models demonstrate that DecentLaM delivers both efficient and high-quality training.
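
The abstract contrasts DmSGD's neighbor-only averaging with the global average used by parallel momentum SGD. The sketch below is a rough, hypothetical illustration of that per-node structure (the function name, signature, and mixing-weight convention are assumptions, not the paper's code): each node takes a momentum step locally, then gossip-averages its iterate with those received from its neighbors rather than performing a global all-reduce. DecentLaM further modifies this update to remove the momentum-incurred bias; that correction is not reproduced here.

```python
import numpy as np

def dmsgd_step(x_local, m_local, grad, neighbor_halves, mix_weights,
               lr=0.1, beta=0.9):
    """One illustrative DmSGD iteration on a single node (hypothetical API).

    x_local         : this node's parameters (np.ndarray)
    m_local         : this node's momentum buffer (np.ndarray)
    grad            : stochastic gradient evaluated at x_local
    neighbor_halves : post-descent iterates received from neighbors
    mix_weights     : mixing weights (self weight first), summing to 1
    """
    # Local momentum accumulation.
    m_local = beta * m_local + grad
    # Local descent half-step.
    x_half = x_local - lr * m_local
    # Gossip averaging with neighbors only -- no global all-reduce.
    x_new = mix_weights[0] * x_half
    for w, x_j in zip(mix_weights[1:], neighbor_halves):
        x_new = x_new + w * x_j
    return x_new, m_local

# Toy usage: one node with two neighbors on a 3-parameter model.
x = np.zeros(3)
m = np.zeros(3)
g = np.array([0.5, -0.2, 0.1])
received = [np.full(3, 0.05), np.full(3, -0.03)]
x, m = dmsgd_step(x, m, g, received, mix_weights=[0.5, 0.25, 0.25])
print(x)
```

In this sketch the mixing weights stand in for a row of the doubly stochastic mixing matrix that defines the communication graph; per the abstract, it is precisely the interaction between the momentum buffer and this neighbor averaging that amplifies the inconsistency bias at large batch sizes.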

Authors (7)
  1. Kun Yuan (117 papers)
  2. Yiming Chen (106 papers)
  3. Xinmeng Huang (23 papers)
  4. Yingya Zhang (43 papers)
  5. Pan Pan (24 papers)
  6. Yinghui Xu (48 papers)
  7. Wotao Yin (141 papers)
Citations (58)