On the convergence of decentralized gradient descent with diminishing stepsize, revisited (2203.09079v2)
Published 17 Mar 2022 in math.OC, cs.SY, and eess.SY
Abstract: Distributed optimization has received considerable interest in recent years due to its wide applications in various fields. In this work, we revisit the convergence property of the decentralized gradient descent [A. Nedić and A. Ozdaglar (2009)] on the whole space given by $$ x_i(t+1) = \sum_{j=1}^{n} w_{ij}x_j(t) - \alpha(t) \nabla f_i(x_i(t)), $$ where the stepsize is given as $\alpha(t) = \frac{a}{(t+w)^p}$ with $0 < p \leq 1$. Under the strong convexity assumption on the total cost function $f$, with the local cost functions $f_i$ not necessarily being convex, we show that the sequence converges to the optimizer with rate $O(t^{-p})$ when the values of $a>0$ and $w>0$ are suitably chosen.
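
The sketch below illustrates the decentralized gradient descent update stated in the abstract, with the diminishing stepsize $\alpha(t) = a/(t+w)^p$. The quadratic local costs, the ring-graph Metropolis mixing matrix, and the parameter values `a`, `w`, `p` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

n, d = 5, 3                      # number of agents, dimension of x (assumed)
rng = np.random.default_rng(0)

# Illustrative local costs f_i(x) = 0.5 * ||A_i x - b_i||^2. (Here each f_i
# happens to be convex; the paper only requires the total cost f to be
# strongly convex.)
A = rng.standard_normal((n, d, d))
b = rng.standard_normal((n, d))

def grad_f(i, x):
    """Gradient of the i-th local cost at x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix W: Metropolis weights on a ring graph (assumed).
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

# Stepsize parameters a > 0, w > 0, 0 < p <= 1 (illustrative choices).
a, w, p = 1.0, 10.0, 0.75

x = rng.standard_normal((n, d))  # initial iterates x_i(0), one row per agent
for t in range(2000):
    alpha = a / (t + w) ** p
    # x_i(t+1) = sum_j w_ij x_j(t) - alpha(t) * grad f_i(x_i(t))
    x = W @ x - alpha * np.array([grad_f(i, x[i]) for i in range(n)])

print("final iterates (rows should be close to a common optimizer):")
print(x)
```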