Optimal Memory Scheme for Accelerated Consensus Over Multi-Agent Networks (2112.07108v1)

Published 14 Dec 2021 in eess.SY, cs.SY, and math.OC

Abstract: Consensus over multi-agent networks can be accelerated by introducing agents' memory into the control protocol. In this paper, a more general protocol with both node memory and state deviation memory is designed, with the aim of providing the optimal memory scheme for accelerating consensus. The contributions of this paper are threefold: (i) for the one-tap memory scheme, we demonstrate that state deviation memory is of no use for optimal convergence; (ii) in the worst case, we prove that adding any number of taps of state deviation memory is futile, and one-tap node memory is sufficient to achieve the optimal convergence; (iii) we show that two-tap state deviation memory is effective on some special networks, such as star networks. Numerical examples illustrate the validity and correctness of the obtained results.
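The abstract describes augmenting a standard consensus iteration with agents' memory. The sketch below illustrates one plausible form of the one-tap node-memory idea: a heavy-ball-style momentum term added to weighted-average consensus on a star network. The weight construction, the momentum parameter gamma, and the network size are illustrative assumptions, not the optimal scheme derived in the paper.

```python
# Minimal sketch: average consensus with and without one-tap node memory.
# All parameter choices here are illustrative, not the paper's optimal design.
import numpy as np

def metropolis_weights(adj):
    """Symmetric, doubly stochastic weight matrix from a 0/1 adjacency matrix."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

def consensus(W, x0, steps, gamma=0.0):
    """Iterate x(k+1) = W x(k) + gamma * (x(k) - x(k-1)); gamma=0 is memoryless."""
    x_prev, x = x0.copy(), x0.copy()
    avg = x0.mean()
    errs = []
    for _ in range(steps):
        x_next = W @ x + gamma * (x - x_prev)
        x_prev, x = x, x_next
        errs.append(np.linalg.norm(x - avg))
    return np.array(errs)

# Star network on 6 nodes (node 0 is the hub), as mentioned in the abstract.
n = 6
adj = np.zeros((n, n), dtype=int)
adj[0, 1:] = adj[1:, 0] = 1

W = metropolis_weights(adj)
x0 = np.random.default_rng(0).normal(size=n)

plain = consensus(W, x0, steps=50)               # no memory
memory = consensus(W, x0, steps=50, gamma=0.3)   # one-tap node memory (assumed gamma)
print("error after 50 steps, memoryless:     ", plain[-1])
print("error after 50 steps, one-tap memory: ", memory[-1])
```

Because W is doubly stochastic, both iterations preserve the network average, and for this star graph the memory term shrinks the per-step contraction factor, so the second printed error is several orders of magnitude smaller than the first.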

Citations (14)
