AB/Push-Pull Method for Distributed Optimization in Time-Varying Directed Networks (2209.06974v2)

Published 14 Sep 2022 in math.OC

Abstract: In this paper, we study the distributed optimization problem for a system of agents embedded in time-varying directed communication networks. Each agent has its own cost function, and the agents cooperate to determine the global decision that minimizes the sum of all individual cost functions. We consider the so-called push-pull gradient-based algorithm (termed AB/Push-Pull), which employs both row- and column-stochastic weights simultaneously to track the optimal decision and the gradient of the global cost while ensuring consensus and optimality. We show that the algorithm converges linearly to the optimal solution over a time-varying directed network for a constant stepsize when each agent's cost function is smooth and strongly convex. The linear convergence of the method was shown in Saadatniaki et al. (2020), where the multi-step consensus contraction parameters for the row- and column-stochastic mixing matrices are not directly related to the underlying graph structure, and no explicit range for the stepsize value is provided. Compared with Saadatniaki et al. (2020), the novelty of this work is twofold: (1) we establish one-step consensus contraction for both row- and column-stochastic mixing matrices, with the contraction parameters given explicitly in terms of the graph diameter and other graph properties; and (2) we provide explicit upper bounds on the stepsize value in terms of the properties of the cost functions, the mixing matrices, and the graph connectivity structure.
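To make the update structure concrete, below is a minimal simulation sketch of a push-pull/AB-style iteration on a toy problem. Everything problem-specific here is an illustrative assumption rather than the paper's construction: the quadratic local costs, the alternating directed-ring graph sequence, the particular row- and column-stochastic weights, and the stepsize alpha = 0.05 are chosen for demonstration only, not taken from the paper's explicit bounds.

```python
import numpy as np

# Toy setup: n agents with local costs f_i(x) = 0.5 * (x - b_i)^2 for scalar x,
# so the global minimizer of sum_i f_i is the mean of the b_i.
rng = np.random.default_rng(0)
n = 5
b = rng.normal(size=n)
x_star = b.mean()

def grad(x):
    # Stacked local gradients: grad f_i(x_i) = x_i - b_i.
    return x - b

def mixing_matrices(k):
    # Illustrative time-varying directed graphs: two directed rings that
    # alternate with k. Each ring is strongly connected on its own; the
    # weights below are only required to be row-/column-stochastic.
    shift = 1 if k % 2 == 0 else 2
    P = np.roll(np.eye(n), shift, axis=1)   # P[i, (i + shift) % n] = 1
    w = (np.arange(n) + 1.0) / (n + 1.0)    # heterogeneous self-weights
    A = np.diag(w) + np.diag(1.0 - w) @ P   # row-stochastic ("pull") weights
    B = np.diag(w) + P @ np.diag(1.0 - w)   # column-stochastic ("push") weights
    return A, B

alpha = 0.05               # constant stepsize (the paper derives explicit bounds)
x = rng.normal(size=n)     # arbitrary initial decisions
y = grad(x)                # gradient tracker, initialized at the local gradients

for k in range(500):
    A, B = mixing_matrices(k)
    g_old = grad(x)
    x = A @ (x - alpha * y)        # pull: mix decisions after a tracked step
    y = B @ y + grad(x) - g_old    # push: track the sum of local gradients

print("max |x_i - x*| =", np.abs(x - x_star).max())  # should be near zero
```

Because B is column-stochastic and y is initialized at the local gradients, the invariant 1ᵀy_k = 1ᵀ∇f(x_k) holds at every iteration, which is what lets the tracker steer all agents toward the global minimizer; the printed residual is only a quick sanity check, not a substitute for the paper's linear-rate analysis.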
