
On the convergence of decentralized gradient descent with diminishing stepsize, revisited (2203.09079v2)

Published 17 Mar 2022 in math.OC, cs.SY, and eess.SY

Abstract: Distributed optimization has received a lot of interest in recent years due to its wide applications in various fields. In this work, we revisit the convergence property of the decentralized gradient descent [A. Nedić and A. Ozdaglar (2009)] on the whole space, given by $$ x_i(t+1) = \sum_{j=1}^{m} w_{ij}x_j(t) - \alpha(t) \nabla f_i(x_i(t)), $$ where the stepsize is given as $\alpha(t) = \frac{a}{(t+w)^p}$ with $0 < p \leq 1$. Under the strong convexity assumption on the total cost function $f$, with the local cost functions $f_i$ not necessarily convex, we show that the sequence converges to the optimizer at rate $O(t^{-p})$ when the values of $a>0$ and $w>0$ are suitably chosen.

Citations (6)
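For concreteness, the following is a minimal numerical sketch of the iteration from the abstract. The quadratic local costs $f_i$, the four-agent ring mixing matrix $W$, and the stepsize parameters $a = w = p = 1$ are illustrative assumptions for this sketch, not taken from the paper.

```python
import numpy as np

# Sketch of the decentralized gradient descent (DGD) iteration:
#   x_i(t+1) = sum_j w_ij x_j(t) - alpha(t) * grad f_i(x_i(t)),
# with diminishing stepsize alpha(t) = a / (t + w)**p, 0 < p <= 1.
# Local costs and mixing matrix below are illustrative assumptions.

m, d = 4, 2                      # number of agents, dimension of x
rng = np.random.default_rng(0)

# Hypothetical local costs f_i(x) = 0.5 * ||A_i x - b_i||^2
# (their sum f is strongly convex with probability one here)
A = rng.normal(size=(m, d, d))
b = rng.normal(size=(m, d))
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix over a ring of 4 agents (assumed topology)
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

a, w, p = 1.0, 1.0, 1.0          # stepsize parameters: alpha(t) = a / (t + w)**p
X = np.zeros((m, d))             # agent iterates x_i(t), stacked row-wise

for t in range(2000):
    alpha = a / (t + w) ** p
    G = np.stack([grad(i, X[i]) for i in range(m)])
    X = W @ X - alpha * G        # consensus averaging followed by local gradient step

# All agents should end up near the minimizer of f = sum_i f_i
x_star = np.linalg.solve(sum(A[i].T @ A[i] for i in range(m)),
                         sum(A[i].T @ b[i] for i in range(m)))
print("max agent error:", np.abs(X - x_star).max())
```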
