
Distributed proximal gradient algorithm for non-smooth non-convex optimization over time-varying networks (2103.02271v1)

Published 3 Mar 2021 in math.OC

Abstract: This note studies the distributed non-convex optimization problem with non-smooth regularization, which has wide applications in decentralized learning, estimation, and control. The objective function is the sum of different local objective functions, each consisting of a differentiable (possibly non-convex) cost function and a non-smooth convex function. This paper presents a distributed proximal gradient algorithm for the non-smooth non-convex optimization problem over time-varying multi-agent networks. Each agent updates its local variable estimate using a multi-step consensus operator and the proximal operator. We prove that the generated local variables achieve consensus and converge to the set of critical points with convergence rate $O(1/T)$. Finally, we verify the efficacy of the proposed algorithm by numerical simulations.
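The update described in the abstract (a local gradient step on the smooth part, multi-step consensus over the time-varying network, then a proximal step on the non-smooth convex term) can be sketched numerically. This is a minimal illustration, not the paper's exact algorithm: it assumes an $\ell_1$ regularizer (so the proximal operator is soft-thresholding) and a user-supplied oracle `W_seq(t)` returning a doubly stochastic mixing matrix for round `t`; all names and parameters are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1, a common non-smooth convex regularizer.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def distributed_prox_grad(grads, W_seq, x0, step=0.01, K=3, T=500, lam=0.1):
    """Illustrative sketch of a distributed proximal gradient iteration.

    grads  -- list of n gradient oracles, one per agent (smooth local costs)
    W_seq  -- W_seq(t) gives a doubly stochastic mixing matrix at round t,
              modeling the time-varying communication network
    K      -- number of consensus (mixing) steps per iteration
    """
    n = len(grads)
    X = np.array([np.asarray(x0, dtype=float).copy() for _ in range(n)])
    for t in range(T):
        # Local gradient step on the differentiable (possibly non-convex) part.
        Y = np.array([X[i] - step * grads[i](X[i]) for i in range(n)])
        # Multi-step consensus: mix K times with the current weight matrix.
        for _ in range(K):
            Y = W_seq(t) @ Y
        # Proximal step on the non-smooth convex regularizer.
        X = soft_threshold(Y, step * lam)
    return X
```

For instance, with two agents holding quadratic costs $f_i(x) = \tfrac{1}{2}(x - a_i)^2$ and a complete-graph mixing matrix, the local estimates reach consensus on a common critical point of the regularized average cost.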

Authors (4)
  1. Xia Jiang
  2. Xianlin Zeng
  3. Jian Sun
  4. Jie Chen
