Fenchel Dual Gradient Methods for Distributed Convex Optimization over Time-varying Networks

Published 25 Aug 2017 in math.OC (arXiv:1708.07620v2)

Abstract: Among the many existing distributed algorithms for convex multi-agent optimization, only a handful provide convergence rate guarantees on agent networks with time-varying topologies, and those that do restrict the problem to be unconstrained. Motivated by this, we develop a family of distributed Fenchel dual gradient methods for solving constrained, strongly convex, but not necessarily smooth multi-agent optimization problems over time-varying undirected networks. The proposed algorithms are constructed by applying weighted gradient methods to the Fenchel dual of the multi-agent optimization problem, and they can be implemented in a fully decentralized fashion. We show that the proposed algorithms drive all agents to both primal and dual optimality, asymptotically under a minimal connectivity condition and at sublinear rates under a standard connectivity condition. Finally, the competitive convergence performance of the distributed Fenchel dual gradient methods is demonstrated via simulations.
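To make the dual-gradient idea in the abstract concrete, here is a minimal sketch, not the authors' exact algorithm: a Fenchel-dual gradient iteration for a toy consensus problem, minimize over x of sum_i f_i(x) with f_i(x) = 0.5*(x - a_i)^2, on a fixed (rather than time-varying) ring of four agents. The local data values a_i, the ring topology, and the step size are all illustrative assumptions. Each agent keeps a dual aggregate y_i; because f_i is strongly convex, the local primal minimizer of f_i(x) - y_i*x has the closed form x_i = a_i + y_i, and the dual gradient step only exchanges disagreements with neighbors, so the scheme is fully decentralized.

```python
# Hedged sketch (not the paper's exact method): dual gradient iteration
# for a toy quadratic consensus problem on a fixed ring network.

a = [1.0, 2.0, 3.0, 4.0]          # local data (assumed example values)
n = len(a)
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring graph
y = [0.0] * n                     # dual aggregates, initialized to zero
alpha = 0.2                       # dual step size, < 2 / lambda_max(graph Laplacian)

for _ in range(100):
    # Local primal recovery: argmin_x f_i(x) - y_i * x  =>  x_i = a_i + y_i.
    x = [a[i] + y[i] for i in range(n)]
    # Dual gradient step: each agent moves along its neighbors' disagreement.
    y = [y[i] + alpha * sum(x[j] - x[i] for j in neighbors[i])
         for i in range(n)]

print(x)  # all agents approach the global minimizer mean(a) = 2.5
```

Because the edge weights are symmetric, the y_i sum to zero at every iteration, which pins the consensus value to the global minimizer mean(a); this is the mechanism by which dual optimality forces primal optimality in such schemes.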

Authors (2)
