
Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters (1810.01053v3)

Published 2 Oct 2018 in math.OC

Abstract: In this paper, we study the communication and (sub)gradient computation costs in distributed optimization and give a sharp complexity analysis for the proposed distributed accelerated gradient methods. We present two algorithms based on the framework of the accelerated penalty method with increasing penalty parameters. Our first algorithm is for smooth distributed optimization and it obtains the near-optimal $O\left(\sqrt{\frac{L}{\epsilon(1-\sigma_2(W))}}\log\frac{1}{\epsilon}\right)$ communication complexity and the optimal $O\left(\sqrt{\frac{L}{\epsilon}}\right)$ gradient computation complexity for $L$-smooth convex problems, where $\sigma_2(W)$ denotes the second largest singular value of the weight matrix $W$ associated with the network and $\epsilon$ is the target accuracy. When the problem is $\mu$-strongly convex and $L$-smooth, our algorithm has the near-optimal $O\left(\sqrt{\frac{L}{\mu(1-\sigma_2(W))}}\log^2\frac{1}{\epsilon}\right)$ complexity for communications and the optimal $O\left(\sqrt{\frac{L}{\mu}}\log\frac{1}{\epsilon}\right)$ complexity for gradient computations. Our communication complexities are worse than the lower bounds for smooth distributed optimization only by a factor of $\log\frac{1}{\epsilon}$. As far as we know, our method is the first to achieve both communication and gradient computation lower bounds up to an extra logarithmic factor for smooth distributed optimization. Our second algorithm is designed for non-smooth distributed optimization and it achieves both the optimal $O\left(\frac{1}{\epsilon\sqrt{1-\sigma_2(W)}}\right)$ communication complexity and the optimal $O\left(\frac{1}{\epsilon^2}\right)$ subgradient computation complexity, which match the communication and subgradient computation complexity lower bounds for non-smooth distributed optimization.
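
To make the penalty framework concrete, below is a minimal numerical sketch of an accelerated gradient method applied to a penalized consensus problem with a penalty parameter that is increased across outer rounds. It is an illustration of the general idea, not the paper's exact algorithm or parameter schedules: the quadratic local objectives, the ring-graph gossip matrix, the step sizes, and the doubling penalty schedule are all placeholder choices made for the demo.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): Nesterov-accelerated
# gradient descent on the penalized consensus problem
#   min_X  sum_i f_i(x_i) + (beta_k / 2) * <X, (I - W) X>,
# where beta_k is increased between outer rounds. Each multiplication by W
# corresponds to one round of communication with neighbors.

n_agents, dim = 8, 5
rng = np.random.default_rng(0)

# Hypothetical local objectives f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = rng.normal(size=(n_agents, dim, dim))
b = rng.normal(size=(n_agents, dim))

def grad_local(i, x):
    """Gradient of agent i's local objective at x."""
    return A[i].T @ (A[i] @ x - b[i])

# Symmetric, doubly stochastic gossip matrix W for a ring network.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

L_f = max(np.linalg.norm(A[i].T @ A[i], 2) for i in range(n_agents))

X = np.zeros((n_agents, dim))   # current iterates, one row per agent
Y = X.copy()                    # extrapolation points
beta = 1.0                      # penalty parameter, increased each outer round

for outer in range(20):
    # Smoothness constant of the penalized objective fixes the step size.
    L_total = L_f + beta * np.linalg.norm(np.eye(n_agents) - W, 2)
    step = 1.0 / L_total
    theta_prev = 1.0
    for k in range(50):
        # Gradient of the penalized objective at Y: local gradients plus
        # beta * (I - W) Y, which requires one communication round.
        G = np.stack([grad_local(i, Y[i]) for i in range(n_agents)])
        G += beta * (Y - W @ Y)
        X_new = Y - step * G
        # Standard Nesterov momentum update.
        theta = (1 + np.sqrt(1 + 4 * theta_prev**2)) / 2
        Y = X_new + ((theta_prev - 1) / theta) * (X_new - X)
        X, theta_prev = X_new, theta
    beta *= 2.0                 # increase the penalty before the next round

consensus_err = np.linalg.norm(X - X.mean(axis=0))
print(f"consensus error after all rounds: {consensus_err:.3e}")
```

As the penalty grows, the iterates of the different agents are pushed toward consensus while the accelerated inner loop keeps the gradient computation count low; the paper's analysis chooses the penalty growth and inner iteration counts so that the resulting communication and (sub)gradient complexities match the stated bounds.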
