Achieving Linear Convergence in Distributed Asynchronous Multi-agent Optimization (1803.10359v4)

Published 28 Mar 2018 in math.OC and cs.DC

Abstract: This paper studies multi-agent (convex and nonconvex) optimization over static digraphs. We propose a general distributed asynchronous algorithmic framework whereby i) agents can update their local variables as well as communicate with their neighbors at any time, without any form of coordination; and ii) they can perform their local computations using (possibly) delayed, out-of-sync information from the other agents. Delays need not be known to the agents or obey any specific profile, and can also be time-varying (but bounded). The algorithm builds on a tracking mechanism that is robust against asynchrony (in the above sense), whose goal is to estimate locally the average of the agents' gradients. When applied to strongly convex functions, we prove that it converges at an R-linear (geometric) rate as long as the step-size is sufficiently small. A sublinear convergence rate is proved when nonconvex problems and/or diminishing, uncoordinated step-sizes are considered. To the best of our knowledge, this is the first distributed algorithm with a provable geometric convergence rate in such a general asynchronous setting. Preliminary numerical results demonstrate the efficacy of the proposed algorithm and validate our theoretical findings.
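
To give a concrete sense of the gradient-tracking mechanism at the core of the framework, here is a minimal sketch in Python for the problem min_x sum_i f_i(x): each agent i mixes its iterate with its neighbors' via a mixing matrix W and descends along a local variable y_i that tracks the average of the agents' gradients. Note the assumptions: this sketch is *synchronous* and uses a doubly stochastic W, whereas the paper's algorithm is asynchronous, tolerates delayed out-of-sync information, and works over digraphs; none of that machinery is shown here.

```python
import numpy as np

def gradient_tracking(grads, x0, W, alpha=0.1, iters=200):
    """Synchronous gradient-tracking sketch (illustration only).

    grads: list of per-agent gradient callables grad_i(x)
    x0:    (n_agents, d) initial iterates
    W:     (n_agents, n_agents) doubly stochastic mixing matrix
    """
    n, d = x0.shape
    x = x0.copy()
    g = np.stack([grads[i](x[i]) for i in range(n)])  # local gradients
    y = g.copy()                                      # y_i tracks the average gradient
    for _ in range(iters):
        x_new = W @ x - alpha * y                     # consensus step + descent along tracker
        g_new = np.stack([grads[i](x_new[i]) for i in range(n)])
        y = W @ y + g_new - g                         # tracking update (preserves mean(y) = mean(g))
        x, g = x_new, g_new
    return x

# Toy strongly convex example: f_i(x) = 0.5 * ||x - b_i||^2,
# whose global minimizer is the mean of the b_i.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 5, 3
    b = rng.normal(size=(n, d))
    grads = [lambda x, bi=bi: x - bi for bi in b]
    W = np.full((n, n), 1.0 / n)                      # complete-graph mixing, doubly stochastic
    x = gradient_tracking(grads, np.zeros((n, d)), W)
    print(np.allclose(x, b.mean(axis=0), atol=1e-6))  # all agents reach the global minimizer
```

For a fixed step-size alpha small relative to the strong convexity and smoothness constants, all agents' iterates converge geometrically to the common minimizer, which is the R-linear behavior the abstract refers to; the paper's contribution is showing this survives uncoordinated, delayed updates.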

Authors (3)
  1. Ye Tian (191 papers)
  2. Ying Sun (155 papers)
  3. Gesualdo Scutari (62 papers)
Citations (71)
