Fenchel Dual Gradient Methods for Distributed Convex Optimization over Time-varying Networks (1708.07620v2)
Abstract: Among the many existing distributed algorithms for convex multi-agent optimization, only a handful provide convergence rate guarantees over agent networks with time-varying topologies, and those that do require the problem to be unconstrained. Motivated by this, we develop a family of distributed Fenchel dual gradient methods for solving constrained, strongly convex (but not necessarily smooth) multi-agent optimization problems over time-varying undirected networks. The proposed algorithms are obtained by applying weighted gradient methods to the Fenchel dual of the multi-agent optimization problem and can be implemented in a fully decentralized fashion. We show that they drive all agents to both primal and dual optimality, asymptotically under a minimal connectivity condition and at sublinear rates under a standard connectivity condition. Finally, the competitive convergence performance of the distributed Fenchel dual gradient methods is demonstrated via simulations.
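To illustrate the flavor of a dual gradient method on a consensus-constrained problem, the sketch below applies plain dual (Fenchel-type) gradient ascent to a toy instance with strongly convex quadratic local objectives and box constraints. This is only a minimal illustration under assumed choices, not the paper's exact weighted scheme or time-varying setting: the quadratic costs `a`, `c`, the box `[lb, ub]`, the fixed ring topology `edges`, and the step size `alpha` are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's exact method): dual gradient
# ascent on the dual of a consensus-constrained problem
#   minimize  sum_i f_i(x_i)  s.t.  x_i = x_j for every edge (i, j),  x_i in X_i,
# with f_i(x) = 0.5 * a[i] * (x - c[i])**2 (strongly convex) and X_i = [lb, ub].
import numpy as np

a = np.array([1.0, 2.0, 4.0])        # strong-convexity parameters of f_i
c = np.array([0.0, 3.0, -1.0])       # unconstrained minimizers of f_i
lb, ub = -0.5, 2.0                   # local box constraints X_i
edges = [(0, 1), (1, 2), (2, 0)]     # undirected communication links (assumed fixed here)
lam = np.zeros(len(edges))           # one dual variable per consensus constraint
alpha = 0.2                          # dual step size (assumed small enough for convergence)

for _ in range(500):
    # Each agent aggregates the multipliers of its incident edges (with signs).
    w = np.zeros(len(a))
    for e, (i, j) in enumerate(edges):
        w[i] += lam[e]
        w[j] -= lam[e]
    # Local primal step: x_i = argmin_{x in X_i} f_i(x) + w_i * x, which for a
    # one-dimensional quadratic over a box is the clipped unconstrained minimizer.
    x = np.clip(c - w / a, lb, ub)
    # Dual gradient step: the partial derivative w.r.t. lam_e is x_i - x_j,
    # i.e., the disagreement across edge (i, j); neighbors exchange x locally.
    for e, (i, j) in enumerate(edges):
        lam[e] += alpha * (x[i] - x[j])

print("primal iterates:", x)         # nearly equal entries indicate consensus
```

Because each f_i is strongly convex, the dual gradient is Lipschitz continuous, so a sufficiently small constant step size makes the iterates above approach consensus on the constrained minimizer of the aggregate objective; the paper's algorithms refine this basic idea with weighted dual updates and guarantees over time-varying topologies.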