
A Study on Accelerating Average Consensus Algorithms Using Delayed Feedback (1912.04442v1)

Published 10 Dec 2019 in cs.MA and math.OC

Abstract: In this paper, we study accelerating a Laplacian-based dynamic average consensus algorithm by splitting the conventional delay-free disagreement feedback into a weighted sum of a current and an outdated term. We determine for which weighted sums there exists a range of time delays that yields a higher rate of convergence for the algorithm. For such weights, using the Lambert W function, we obtain the rate-increasing range of the time delay and the maximum reachable rate, and comment on the value of the corresponding maximizing delay. We also study the effect of using outdated feedback on the control effort of the agents and show that only for a specific affine combination of the immediate and outdated feedback does the control effort of the agents not exceed that of the delay-free algorithm. Additionally, we demonstrate that using outdated feedback does not increase the steady-state tracking error of the average consensus algorithm. Lastly, we determine the optimal combination of the current and outdated feedback weights that achieves the maximum increase in the rate of convergence without increasing the control effort of the agents. We demonstrate our results through a numerical example.
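The mechanism described in the abstract can be illustrated with a minimal simulation sketch. This is not the paper's implementation: the network (a 3-agent path graph), the weights, the delay value, and the Euler integration scheme are all assumptions chosen for illustration. The dynamics follow the stated idea of splitting the Laplacian disagreement feedback into a current term (weight alpha) and a delayed term (weight beta):

```python
import numpy as np

def simulate_consensus(L, x0, alpha, beta, tau, dt=0.001, T=10.0):
    """Euler simulation of dx/dt = -alpha * L x(t) - beta * L x(t - tau).

    alpha and beta weight the current and outdated disagreement feedback
    (alpha + beta = 1 gives an affine combination, as in the abstract);
    tau is the feedback delay. Returns the state trajectory.
    """
    steps = int(T / dt)
    d = max(int(round(tau / dt)), 0)
    x = np.array(x0, dtype=float)
    # Pre-delay history: hold the initial state for t < 0.
    hist = [x.copy()] * (d + 1)
    traj = [x.copy()]
    for _ in range(steps):
        x_delayed = hist[0]
        x = x + dt * (-alpha * L @ x - beta * L @ x_delayed)
        hist.append(x.copy())
        hist.pop(0)
        traj.append(x.copy())
    return np.array(traj)

# Hypothetical example: Laplacian of a 3-agent path graph.
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
x0 = [0.0, 1.0, 5.0]

# Delay-free baseline (beta = 0) vs. split current/outdated feedback.
traj_free = simulate_consensus(L, x0, alpha=1.0, beta=0.0, tau=0.0)
traj_delay = simulate_consensus(L, x0, alpha=0.6, beta=0.4, tau=0.2)
```

Because the row sums of the Laplacian feedback are zero, both runs preserve the initial average (here 2.0) and converge to it; whether the delayed variant converges faster depends on the weight/delay pair, which is exactly the trade-off the paper characterizes via the Lambert W function.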

Citations (9)
