
A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free! (2011.01697v1)

Published 3 Nov 2020 in math.OC and cs.LG

Abstract: Decentralized optimization methods enable on-device training of machine learning models without a central coordinator. In many scenarios, communication between devices is energy-demanding and time-consuming, and forms the bottleneck of the entire system. We propose a new randomized first-order method which tackles the communication bottleneck by applying randomized compression operators to the communicated messages. By combining our scheme with a new variance reduction technique that progressively reduces the adverse effect of the injected quantization noise over the iterations, we obtain the first scheme that converges linearly on strongly convex decentralized problems while using compressed communication only. We prove that our method can solve these problems without any increase in the number of communication rounds compared to the baseline that performs no communication compression, while still allowing for a significant compression factor that depends on the conditioning of the problem and the topology of the network. Our key theoretical findings are supported by numerical experiments.
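
To make the communication-compression idea concrete, here is a minimal Python sketch of a generic compressed-gossip consensus step. It is not the authors' algorithm: the rand-k sparsifier, the reference copies `x_hat`, the gossip matrix `W`, and the step size `gamma` are illustrative assumptions standing in for the paper's randomized compression operators and variance reduction technique.

```python
import numpy as np

def rand_k(x, k, rng):
    """Unbiased rand-k sparsification: keep k random coordinates of x
    and rescale by d/k so that E[C(x)] = x. This is one example of a
    randomized compression operator; the paper allows a general class."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out

def compressed_gossip_step(x, x_hat, W, k, gamma, rng):
    """One communication round in the spirit of compressed gossip.

    x     : (n, d) local iterates, one row per node
    x_hat : (n, d) publicly known reference copies of the iterates
    W     : (n, n) doubly stochastic gossip (mixing) matrix
    Only the compressed difference x_i - x_hat_i is transmitted, so
    messages stay cheap while the references track the true iterates.
    (Centralized matrix form for readability; in a real network each
    node would only touch its neighbours' reference copies.)
    """
    n = x.shape[0]
    q = np.stack([rand_k(x[i] - x_hat[i], k, rng) for i in range(n)])
    x_hat = x_hat + q                         # update the public references
    x = x + gamma * (W - np.eye(n)) @ x_hat   # mix using the references
    return x, x_hat

# Toy usage: average consensus on a 4-node ring, sending k=2 of d=10 coords.
rng = np.random.default_rng(0)
n, d, k, gamma = 4, 10, 2, 0.1
W = np.array([[.50, .25, .00, .25],
              [.25, .50, .25, .00],
              [.00, .25, .50, .25],
              [.25, .00, .25, .50]])
x = rng.standard_normal((n, d))
x_hat = np.zeros_like(x)
target = x.mean(axis=0)       # gossip preserves the mean, so this is the fixed point
for _ in range(1000):
    x, x_hat = compressed_gossip_step(x, x_hat, W, k, gamma, rng)
print("consensus error:", np.linalg.norm(x - target))  # shrinks for small gamma
```

The design point mirrored here is the one the abstract highlights: only compressed differences travel over the network, and because the reference copies converge to the iterates, the variance injected by quantization is driven down over the iterations.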

Authors (5)
  1. Dmitry Kovalev (47 papers)
  2. Anastasia Koloskova (18 papers)
  3. Martin Jaggi (155 papers)
  4. Peter Richtarik (286 papers)
  5. Sebastian U. Stich (66 papers)
Citations (68)
