
On the convergence result of the gradient-push algorithm on directed graphs with constant stepsize (2302.08779v2)

Published 17 Feb 2023 in math.OC, cs.DC, and eess.SP

Abstract: Distributed optimization has received a lot of interest due to its wide applications in various fields. It consists of multiple agents, connected by a graph, that optimize a total cost in a collaborative way. In many applications, the graph of the agents is directed. The gradient-push algorithm is a fundamental method for distributed optimization when the agents are connected by a directed graph. Despite its wide use in the literature, its convergence has not been well established for the important case in which the stepsize is constant and the domain is the entire space. This work proves that the gradient-push algorithm with stepsize $\alpha>0$ converges exponentially fast to an $O(\alpha)$-neighborhood of the optimizer if the stepsize $\alpha$ is smaller than a specific value. For this result, we assume that each cost is smooth and the total cost is strongly convex. Numerical experiments are provided to support the theoretical convergence result. We also present a numerical test showing that the gradient-push algorithm may approach a small neighborhood of the minimizer faster than the Push-DIGing algorithm, a variant of the gradient-push algorithm that involves communication of the agents' gradient information.
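The abstract's setting can be illustrated with a minimal sketch of the standard gradient-push (push-sum) iteration on a small directed graph. The graph, the quadratic costs $f_i(x) = \tfrac{1}{2}(x - b_i)^2$, and the stepsize below are illustrative assumptions, not taken from the paper; they merely satisfy its hypotheses (each cost smooth, total cost strongly convex). Each agent mixes its state and a push-sum weight through a column-stochastic matrix, de-biases by dividing the two, and takes a gradient step with constant stepsize $\alpha$:

```python
import numpy as np

# Gradient-push sketch on a directed 3-cycle with self-loops (illustrative setup).
# Costs f_i(x) = 0.5*(x - b_i)^2, so the total cost is strongly convex with
# minimizer mean(b) = 4.0.
n = 3
b = np.array([1.0, 3.0, 8.0])

# Column-stochastic mixing matrix: each agent keeps half its mass and
# pushes half to its out-neighbor on the directed ring.
B = np.zeros((n, n))
for j in range(n):
    B[j, j] = 0.5
    B[(j + 1) % n, j] = 0.5

alpha = 0.05          # constant stepsize (a hypothetical choice)
x = np.zeros(n)       # agent states
y = np.ones(n)        # push-sum weights

for _ in range(2000):
    w = B @ x                 # push-sum mixing of states
    y = B @ y                 # push-sum mixing of weights
    z = w / y                 # de-biased local estimates
    grad = z - b              # gradient of f_i at z_i for the quadratic costs
    x = w - alpha * grad      # gradient step with constant stepsize

print(z)  # each entry lies in an O(alpha)-neighborhood of the minimizer 4.0
```

Consistent with the paper's result, the iterates do not converge exactly with a constant stepsize: each agent's estimate settles in an $O(\alpha)$-neighborhood of the minimizer, and shrinking `alpha` tightens that neighborhood at the cost of slower convergence.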

Citations (1)
