
Distributed Normal Map-based Stochastic Proximal Gradient Methods over Networks (2412.13054v2)

Published 17 Dec 2024 in math.OC

Abstract: Consider $n$ agents connected over a network that collaborate to minimize the average of their local cost functions combined with a common nonsmooth function. This paper introduces a unified algorithmic framework for solving such a problem through distributed stochastic proximal gradient methods, leveraging the normal map update scheme. Within this framework, we propose two new algorithms, termed Normal Map-based Distributed Stochastic Gradient Tracking (norM-DSGT) and Normal Map-based Exact Diffusion (norM-ED), to solve the distributed composite optimization problem over a connected network. We demonstrate that both methods asymptotically achieve convergence rates comparable to the centralized stochastic proximal gradient descent method under a general variance condition on the stochastic gradients. Additionally, the number of iterations required for norM-ED to achieve such a rate (i.e., the transient time) behaves as $\mathcal{O}(n^{3}/(1-\lambda)^{2})$ for minimizing composite objective functions, matching the performance of the non-proximal ED algorithm. Here $1-\lambda$ denotes the spectral gap of the mixing matrix related to the underlying network topology. To our knowledge, such a convergence result is state-of-the-art for the considered composite problem. Under the same condition, norM-DSGT enjoys a transient time of $\mathcal{O}(\max\{n^{3}/(1-\lambda)^{2},\, n/(1-\lambda)^{4}\})$ and behaves more stably than norM-ED under decaying stepsizes for the tested problems.
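
To make the normal map update scheme concrete, here is a minimal single-agent sketch in Python. The abstract does not give explicit update formulas, so this follows the standard normal map construction for composite problems $\min_x f(x) + r(x)$: iterate on an auxiliary variable $z$, recover the primal iterate via the proximal operator, and step along a stochastic estimate of the normal map $g(\operatorname{prox}_{\alpha r}(z)) + (z - \operatorname{prox}_{\alpha r}(z))/\alpha$. The $\ell_1$ regularizer, the function names `prox_l1` and `normal_map_step`, and the step sizes are illustrative assumptions, not taken from the paper; the distributed variants additionally mix $z$ across neighbors via the network's mixing matrix.

```python
import numpy as np

def prox_l1(z, alpha):
    """Soft-thresholding: proximal operator of alpha * ||.||_1 (assumed regularizer)."""
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

def normal_map_step(z, stoch_grad, alpha, beta):
    """One normal map-based stochastic proximal gradient step on the auxiliary variable z.

    Zeros of the normal map correspond to stationary points x = prox(z).
    """
    x = prox_l1(z, alpha)                  # primal iterate recovered from z
    v = stoch_grad(x) + (z - x) / alpha    # stochastic normal map direction
    return z - beta * v                    # update the auxiliary variable

# Toy usage: l1-regularized least squares with a noisy gradient oracle.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 10)), rng.standard_normal(50)
grad = lambda x: A.T @ (A @ x - b) / len(b) + 0.01 * rng.standard_normal(10)

z = np.zeros(10)
for _ in range(200):
    z = normal_map_step(z, grad, alpha=0.1, beta=0.05)
x_star = prox_l1(z, 0.1)  # final primal solution
```

A design point worth noting: unlike the classical proximal gradient step, the normal map scheme moves the nonsmooth part's influence into the direction $v$ itself, which is what lets the distributed methods track and average these directions across agents.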

Authors (3)
  1. Kun Huang (85 papers)
  2. Shi Pu (109 papers)
  3. Angelia Nedić (67 papers)