
Distributed proximal gradient algorithm for non-smooth non-convex optimization over time-varying networks

Published 3 Mar 2021 in math.OC (arXiv:2103.02271v1)

Abstract: This note studies the distributed non-convex optimization problem with non-smooth regularization, which has wide applications in decentralized learning, estimation, and control. The objective function is the sum of local objective functions, each consisting of a differentiable (possibly non-convex) cost function and a non-smooth convex function. This paper presents a distributed proximal gradient algorithm for the non-smooth non-convex optimization problem over time-varying multi-agent networks. Each agent updates its local variable estimate using a multi-step consensus operator and the proximal operator. We prove that the generated local variables achieve consensus and converge to the set of critical points with convergence rate $O(1/T)$. Finally, we verify the efficacy of the proposed algorithm through numerical simulations.
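The abstract describes each iteration as a multi-step consensus operation followed by a proximal step on the non-smooth term. The sketch below is not the paper's exact algorithm; it is a generic distributed proximal gradient iteration under assumed ingredients: an L1 regularizer handled by soft-thresholding, doubly stochastic mixing matrices for the (time-varying) network, a user-supplied gradient oracle for each agent's smooth local cost, and a constant step size. All names and parameters are illustrative.

```python
import numpy as np

# Illustrative sketch only; not the paper's exact method.
# Assumptions (not taken from the paper): r(x) = lam * ||x||_1,
# doubly stochastic mixing matrices each round, constant step size.

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def multi_step_consensus(X, mixing_matrices):
    """Apply several rounds of averaging with (possibly time-varying)
    doubly stochastic matrices; row i of X is agent i's local estimate."""
    for W in mixing_matrices:
        X = W @ X
    return X

def distributed_prox_grad(X0, grad_f, mixing_seq, lam, step, T):
    """Generic distributed proximal gradient iteration.

    X0         : (n_agents, dim) initial local estimates
    grad_f     : grad_f(i, x) -> gradient of agent i's smooth
                 (possibly non-convex) local cost at x
    mixing_seq : callable t -> list of doubly stochastic matrices for round t
    lam, step  : regularization weight and step size
    T          : number of iterations
    """
    X = X0.copy()
    n = X.shape[0]
    for t in range(T):
        # 1) multi-step consensus over the time-varying network
        X = multi_step_consensus(X, mixing_seq(t))
        # 2) local gradient step on each agent's smooth cost
        G = np.stack([grad_f(i, X[i]) for i in range(n)])
        # 3) proximal step on the non-smooth convex regularizer
        X = soft_threshold(X - step * G, step * lam)
    return X
```

In a small experiment one might supply Metropolis-type mixing weights for each round's graph and a quadratic or logistic local cost per agent; the paper's analysis concerns this style of scheme, showing consensus of the local variables and an O(1/T) rate toward the set of critical points.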
