
On Stability and Convergence of Distributed Filters (2102.11250v1)

Published 22 Feb 2021 in eess.SP, cs.SY, and eess.SY

Abstract: Recent years have borne witness to the proliferation of distributed filtering techniques, in which a collection of agents communicating over an ad-hoc network aims to collaboratively estimate and track the state of a system. These techniques form the enabling technology of modern multi-agent systems and have gained great importance in the engineering community. Although most distributed filtering techniques come with a set of stability and convergence criteria, the conditions imposed are found to be unnecessarily restrictive. The paradigm of stability and convergence in distributed filtering is revisited in this manuscript. Accordingly, a general distributed filter is constructed and its estimation error dynamics are formulated. The conducted analysis demonstrates that the conditions for achieving stable filtering operation are the same as those required in the centralized filtering setting. Finally, the concepts are demonstrated in a Kalman filtering framework and validated using simulation examples.
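The setting the abstract describes — agents running local filters and fusing estimates over a network — can be illustrated with a minimal sketch. This is not the paper's filter: it is a generic consensus-on-estimates distributed Kalman filter for a scalar state, with all dynamics, noise levels, and the fully connected averaging matrix `A` chosen here purely for illustration.

```python
import numpy as np

# Hypothetical sketch: N agents track a scalar state x_{k+1} = a*x_k + w_k,
# each observing y_i = x_k + v_i. Every step, each agent runs a standard
# local Kalman update, then averages its estimate with its neighbors'
# (a simple consensus fusion step). All parameters are illustrative.
rng = np.random.default_rng(0)
N, T = 4, 50
a, q, r = 0.95, 0.01, 0.25        # dynamics coefficient, process/measurement noise variances
A = np.ones((N, N)) / N           # consensus weights: fully connected, equal weighting

x = 1.0                           # true state
xhat = np.zeros(N)                # each agent's local estimate
P = np.ones(N)                    # each agent's local error variance

for _ in range(T):
    # true state evolves
    x = a * x + rng.normal(scale=np.sqrt(q))
    # local prediction step
    xhat = a * xhat
    P = a * P * a + q
    # local measurement update (scalar Kalman gain)
    y = x + rng.normal(scale=np.sqrt(r), size=N)
    K = P / (P + r)
    xhat = xhat + K * (y - xhat)
    P = (1.0 - K) * P
    # consensus step: each agent fuses the network's estimates
    xhat = A @ xhat

print(np.abs(xhat - x))           # all agents' errors, small after T steps
```

With equal consensus weights over a complete graph, the agents' estimates coincide after each fusion step; the paper's point, loosely, is that stability of such error dynamics can be analyzed under the same conditions as a single centralized filter.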

Citations (9)

