On the convergence result of the gradient-push algorithm on directed graphs with constant stepsize (2302.08779v2)
Abstract: Distributed optimization has received a lot of interest due to its wide applications in various fields. It consists of multiple agents that are connected by a graph and optimize a total cost collaboratively. In many applications, the graph of the agents is directed. The gradient-push algorithm is a fundamental method for distributed optimization when the agents are connected by a directed graph. Despite its wide use in the literature, its convergence has not been well established for the important case where the stepsize is constant and the domain is the entire space. This work proves that the gradient-push algorithm with stepsize $\alpha>0$ converges exponentially fast to an $O(\alpha)$-neighborhood of the optimizer, provided the stepsize $\alpha$ is smaller than a specific value. For this result, we assume that each cost is smooth and the total cost is strongly convex. Numerical experiments are provided to support the theoretical convergence result. We also present a numerical test showing that the gradient-push algorithm may approach a small neighborhood of the minimizer faster than the Push-DIGing algorithm, a variant of the gradient-push algorithm that involves communicating the agents' gradient information.
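As a minimal illustration of the setting described above, the following sketch runs the standard gradient-push (push-sum gradient) iteration on a small directed ring with toy quadratic costs. The graph, the costs $f_i(x) = \tfrac12 (x - b_i)^2$, and all parameter values are our own choices for demonstration, not the experimental setup of the paper; each agent keeps a push-sum numerator $x_i$, a weight $y_i$, and an estimate $z_i = x_i/y_i$, and the mixing matrix is column-stochastic as push-sum requires on directed graphs.

```python
import numpy as np

# Gradient-push sketch on a directed ring of n agents (illustrative only).
# Agent i holds the toy cost f_i(x) = 0.5 * (x - b[i])**2, so the total
# cost sum_i f_i is minimized at b.mean().

n = 5
rng = np.random.default_rng(0)
b = rng.normal(size=n)          # minimizer of the total cost is b.mean()

# Directed ring: agent j keeps half of its mass and pushes half to
# agent (j + 1) % n.  Columns of A sum to 1 (column-stochastic).
A = np.zeros((n, n))
for j in range(n):
    A[j, j] = 0.5
    A[(j + 1) % n, j] = 0.5

alpha = 0.05                    # constant stepsize
x = np.zeros(n)                 # push-sum numerators
y = np.ones(n)                  # push-sum weights
z = x / y                       # agents' estimates of the minimizer

for _ in range(2000):
    w = A @ x                   # mix numerators
    y = A @ y                   # mix weights
    z = w / y                   # de-biased estimates
    grad = z - b                # gradient of f_i at z_i for the toy costs
    x = w - alpha * grad        # gradient step on the numerator

# With a constant stepsize, every z_i settles in an O(alpha)
# neighborhood of the minimizer b.mean().
print(np.max(np.abs(z - b.mean())))
```

The final printed distance is small but in general not exactly zero: with a constant stepsize the iterates reach a neighborhood of the optimizer whose radius scales with $\alpha$, which is the behavior the abstract's convergence result quantifies.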