Fully Asynchronous Distributed Optimization with Linear Convergence in Directed Networks (1901.08215v3)

Published 24 Jan 2019 in cs.DC and math.OC

Abstract: We consider the distributed optimization problem, the goal of which is to minimize the sum of local objective functions over a directed network. Though it has been widely studied recently, most of the existing algorithms are designed for synchronized or randomly activated implementation, which may create deadlocks in practice. In sharp contrast, we propose a \emph{fully} asynchronous push-pull gradient algorithm (APPG) where each node updates without waiting for any other node by using (possibly stale) information from neighbors. Thus, it is both deadlock-free and robust to any bounded communication delay. Moreover, we construct two novel augmented networks to theoretically evaluate its performance from the worst-case point of view and show that if local functions have Lipschitz-continuous gradients and their sum satisfies the Polyak-Łojasiewicz condition (convexity is not required), each node of APPG converges to the same optimal solution at a linear rate of $\mathcal{O}(\lambda^k)$, where $\lambda\in(0,1)$ and the virtual counter $k$ increases by one no matter which node updates. This largely elucidates its linear speedup efficiency and shows its advantage over the synchronous version. Finally, the performance of APPG is numerically validated via a logistic regression problem on the \emph{Covertype} dataset.
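
The abstract builds on push-pull gradient tracking over a directed network: each node mixes decision variables with row-stochastic (pull) weights and tracks the network-average gradient with column-stochastic (push) weights. The sketch below is a minimal, illustrative Python version of that synchronous push-pull skeleton on a toy least-squares problem over a 4-node directed ring; the local data, mixing weights, step size, and iteration count are assumptions for illustration, and it does not model the asynchronous, delay-tolerant updates that distinguish APPG.

```python
# Minimal sketch of synchronous push-pull gradient tracking on a directed
# ring; an illustrative toy, not the authors' APPG implementation.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 4, 3, 10                       # nodes, dimension, samples per node
A = rng.standard_normal((n, m, d))       # local least-squares data (assumed)
b = rng.standard_normal((n, m))

def grad(i, x):
    """Gradient of the local objective f_i(x) = 0.5 * ||A_i x - b_i||^2."""
    return A[i].T @ (A[i] @ x - b[i])

# Directed ring 0 -> 1 -> 2 -> 3 -> 0 with self-loops.
R = np.array([[.7, 0., 0., .3],          # row-stochastic "pull" weights
              [.4, .6, 0., 0.],
              [0., .5, .5, 0.],
              [0., 0., .2, .8]])
C = np.array([[.6, 0., 0., .5],          # column-stochastic "push" weights
              [.4, .5, 0., 0.],
              [0., .5, .7, 0.],
              [0., 0., .3, .5]])

alpha, T = 5e-3, 5000                    # step size and iterations (assumed)
x = np.zeros((n, d))                     # row i holds node i's estimate
g = np.array([grad(i, x[i]) for i in range(n)])
y = g.copy()                             # gradient trackers, y_0 = grad(x_0)

for _ in range(T):
    x = R @ (x - alpha * y)              # pull: mix estimates and descend
    g_new = np.array([grad(i, x[i]) for i in range(n)])
    y = C @ y + g_new - g                # push: track the average gradient
    g = g_new

print("max disagreement across nodes:", np.abs(x - x.mean(0)).max())
print("norm of summed local gradients:", np.linalg.norm(g.sum(0)))
```

In APPG itself, per the abstract, each node would run such an update on its own clock without waiting, reading possibly stale neighbor values, and the paper's analysis shows the linear rate is retained under any bounded communication delay.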

Citations (17)