SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization (2005.07041v3)

Published 13 May 2020 in cs.LG, cs.DC, and stat.ML

Abstract: In this paper, we propose and analyze SQuARM-SGD, a communication-efficient algorithm for decentralized training of large-scale machine learning models over a network. In SQuARM-SGD, each node performs a fixed number of local SGD steps using Nesterov's momentum and then sends sparsified and quantized updates to its neighbors, regulated by a locally computable triggering criterion. We provide convergence guarantees of our algorithm for general (non-convex) and convex smooth objectives, which, to the best of our knowledge, is the first theoretical analysis for compressed decentralized SGD with momentum updates. We show that the convergence rate of SQuARM-SGD matches that of vanilla SGD. We empirically show that including momentum updates in SQuARM-SGD can lead to better test performance than the current state of the art, which does not consider momentum updates.
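
The abstract describes the per-node procedure only at a high level, so the following is a minimal single-node sketch of one SQuARM-SGD round, written in NumPy on a toy quadratic objective. All concrete choices here are assumptions for exposition, not the paper's actual operators or constants: the number of local steps H, the top-k sparsifier, the sign-scale quantizer, and the triggering threshold are illustrative stand-ins.

```python
# Illustrative sketch of one SQuARM-SGD round from one node's point of view.
# H, top_k, quantize, and the triggering threshold are assumed for exposition;
# the paper's compression operators and triggering rule may differ.
import numpy as np

rng = np.random.default_rng(0)
d = 10                          # parameter dimension
x = rng.normal(size=d)          # local model copy
x_hat = x.copy()                # copy of the model that neighbors currently hold
v = np.zeros(d)                 # Nesterov momentum buffer
lr, beta, H, k = 0.05, 0.9, 5, 3   # step size, momentum, local steps, sparsity

def grad(w):                    # toy objective f(w) = 0.5 * ||w||^2, noisy gradient
    return w + 0.01 * rng.normal(size=d)

def top_k(u, k):                # sparsification: keep the k largest-magnitude entries
    out = np.zeros_like(u)
    idx = np.argsort(np.abs(u))[-k:]
    out[idx] = u[idx]
    return out

def quantize(u):                # sign-scale quantization (one of many possible choices)
    scale = np.linalg.norm(u, 1) / max(np.count_nonzero(u), 1)
    return scale * np.sign(u)

# --- H local SGD steps with Nesterov's momentum ---
for _ in range(H):
    g = grad(x + beta * v)      # look-ahead gradient (Nesterov)
    v = beta * v - lr * g
    x = x + v

# --- event-triggered, compressed communication ---
delta = x - x_hat               # what the neighbors do not yet know
threshold = 0.01                # locally computable triggering constant (assumed value)
if np.linalg.norm(delta) > threshold:
    msg = quantize(top_k(delta, k))   # sparsify, then quantize the update
    x_hat = x_hat + msg               # neighbors apply the same compressed message
    # In the full algorithm, nodes then mix their models with neighbors'
    # x_hat copies according to the network's gossip (mixing) matrix.

print("model after one round:", np.round(x, 3))
```

The point of the triggering criterion is that communication is skipped entirely when the local change since the last broadcast is small, so bandwidth is spent only on rounds where the compressed update carries meaningful information.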

Authors (4)
  1. Navjot Singh (16 papers)
  2. Deepesh Data (22 papers)
  3. Jemin George (25 papers)
  4. Suhas Diggavi (102 papers)
Citations (50)