
Decentralized Deep Learning using Momentum-Accelerated Consensus (2010.11166v2)

Published 21 Oct 2020 in cs.LG, cs.DC, and stat.ML

Abstract: We consider the problem of decentralized deep learning where multiple agents collaborate to learn from a distributed dataset. While there exist several decentralized deep learning approaches, the majority consider a central parameter-server topology for aggregating the model parameters from the agents. However, such a topology may be inapplicable in networked systems such as ad-hoc mobile networks, field robotics, and power network systems where direct communication with the central parameter server may be inefficient. In this context, we propose and analyze a novel decentralized deep learning algorithm where the agents interact over a fixed communication topology (without a central server). Our algorithm is based on the heavy-ball acceleration method used in gradient-based optimization. We propose a novel consensus protocol where each agent shares with its neighbors its model parameters as well as gradient-momentum values during the optimization process. We consider both strongly convex and non-convex objective functions and theoretically analyze our algorithm's performance. We present several empirical comparisons with competing decentralized learning methods to demonstrate the efficacy of our approach under different communication topologies.
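The abstract describes agents exchanging both model parameters and gradient-momentum values with their neighbors over a fixed communication topology, in the spirit of heavy-ball acceleration. The sketch below is a minimal NumPy illustration of one plausible form of such a momentum-accelerated consensus step; the mixing matrix `W`, the step size `alpha`, the momentum coefficient `beta`, and the toy quadratic objective are illustrative assumptions, not the paper's exact update rule.

```python
# Minimal sketch of a momentum-accelerated consensus step (illustrative only).
# The mixing matrix W, alpha, beta, and the toy objective are hypothetical
# placeholders, not the update analyzed in the paper.
import numpy as np

def consensus_momentum_step(params, momenta, grads, W, alpha=0.01, beta=0.9):
    """One synchronous update for all agents.

    params, momenta, grads: arrays of shape (n_agents, dim)
    W: (n_agents, n_agents) doubly stochastic mixing matrix matching the
       communication topology (W[i, j] > 0 only if i and j are neighbors).
    """
    # Each agent averages its neighbors' momentum, then applies a
    # heavy-ball-style correction using its local gradient.
    new_momenta = beta * (W @ momenta) - alpha * grads
    # Each agent averages its neighbors' parameters and adds its new momentum.
    new_params = W @ params + new_momenta
    return new_params, new_momenta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_agents, dim = 4, 3
    # Ring topology: each agent averages itself and its two neighbors.
    W = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        W[i, i] = W[i, (i - 1) % n_agents] = W[i, (i + 1) % n_agents] = 1 / 3

    params = rng.normal(size=(n_agents, dim))
    momenta = np.zeros((n_agents, dim))
    target = rng.normal(size=dim)      # shared quadratic objective, for illustration
    for _ in range(100):
        grads = params - target        # gradient of 0.5 * ||x_i - target||^2 per agent
        params, momenta = consensus_momentum_step(params, momenta, grads, W)
    print("max disagreement across agents:", np.abs(params - params.mean(0)).max())
```

The key point the example highlights is that both the parameters and the momentum buffers are mixed over the topology each round, rather than parameters alone as in plain decentralized SGD with consensus.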

Authors (6)
  1. Aditya Balu (51 papers)
  2. Zhanhong Jiang (26 papers)
  3. Sin Yong Tan (8 papers)
  4. Chinmay Hegde (1 paper)
  5. Soumik Sarkar (111 papers)
  6. Young M Lee (2 papers)
Citations (20)
