Improved Convergence Rate for a Distributed Two-Time-Scale Gradient Method under Random Quantization (2105.14089v1)

Published 28 May 2021 in eess.SY and cs.SY

Abstract: We study the so-called distributed two-time-scale gradient method for solving convex optimization problems over a network of agents when the communication bandwidth between the nodes is limited, so information exchanged between the nodes must be quantized. Our main contribution is a novel analysis, resulting in an improved convergence rate of this method compared to existing works. In particular, we show that the method converges at a rate $O(\log_2 k/\sqrt{k})$ to the optimal solution when the underlying objective function is strongly convex and smooth. The key technique in our analysis is a Lyapunov function that simultaneously captures the coupling of the consensus and optimality errors generated by the method.
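To make the setting concrete, the following is a minimal sketch of a distributed gradient iteration with randomly quantized communication over a ring network. It is an illustration under assumptions, not the paper's algorithm: the quadratic local objectives, the ring mixing matrix, the dithered quantizer, the step-size schedules, and all variable names are placeholders chosen for the example.

```python
import numpy as np

# Illustrative sketch only: local objectives, topology, quantizer resolution,
# and step-size schedules are assumptions, not taken from the paper.
rng = np.random.default_rng(0)
n_agents, dim = 5, 3

# Each agent i holds a local strongly convex quadratic f_i(x) = 0.5*||A_i x - b_i||^2.
A = [rng.standard_normal((dim, dim)) + 2 * np.eye(dim) for _ in range(n_agents)]
b = [rng.standard_normal(dim) for _ in range(n_agents)]

def local_grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring network.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

def random_quantize(v, delta):
    """Unbiased (dithered) quantizer with resolution delta."""
    scaled = v / delta
    low = np.floor(scaled)
    prob_up = scaled - low
    return delta * (low + (rng.random(v.shape) < prob_up))

x = rng.standard_normal((n_agents, dim))  # local iterates, one row per agent

for k in range(1, 2001):
    # Two decaying step sizes: a faster one for consensus on quantized values
    # and a slower one for the local gradient step. The exact schedules here
    # are illustrative, not the ones analyzed in the paper.
    alpha = 1.0 / (k ** 0.6)
    beta = 1.0 / k
    q = np.array([random_quantize(x[i], delta=0.05) for i in range(n_agents)])
    consensus = W @ q - q  # disagreement measured through quantized exchanges
    grads = np.array([local_grad(i, x[i]) for i in range(n_agents)])
    x = x + alpha * consensus - beta * grads

# All agents should end up near the minimizer of sum_i f_i.
x_star = np.linalg.solve(sum(a.T @ a for a in A),
                         sum(a.T @ bi for a, bi in zip(A, b)))
print("max agent error:", np.max(np.linalg.norm(x - x_star, axis=1)))
```

The sketch mirrors the two ingredients the abstract highlights: a consensus error driven by quantized exchanges and an optimality error driven by local gradients, which the paper's Lyapunov analysis treats jointly.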

Citations (10)
