
Conditions for Convergence of Dynamic Regressor Extension and Mixing Parameter Estimator Using LTI Filters (2007.15224v5)

Published 30 Jul 2020 in eess.SY and cs.SY

Abstract: In this note we study the conditions for convergence of the recently introduced dynamic regressor extension and mixing (DREM) parameter estimator when the extended regressor is generated using LTI filters. In particular, we are interested in relating these conditions to those required for convergence of the classical gradient (or least-squares) estimator, namely the well-known persistent excitation (PE) requirement on the original regressor vector, $\phi(t) \in \mathbb{R}^q$, with $q \in \mathbb{N}$ the number of unknown parameters. Moreover, we study the case when only interval excitation (IE) is available, under which DREM, concurrent, and composite learning schemes ensure global convergence, with DREM achieving convergence in finite time. Regarding PE we prove that, under some mild technical assumptions, if $\phi(t)$ is PE then the scalar regressor of DREM, $\Delta(t) \in \mathbb{R}$, is also PE, ensuring exponential convergence. Concerning IE we prove that if $\phi(t)$ is IE then $\Delta(t)$ is also IE. All these results are established in the almost-sure sense, namely by proving that the set of filter parameters for which the claims do not hold is of zero measure. The main technical tool used in our proof is inspired by a study of Luenberger observers for nonautonomous nonlinear systems recently reported in the literature.
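For readers unfamiliar with the construction, DREM turns the vector regression $y(t) = \phi^\top(t)\theta$ into $q$ decoupled scalar regressions: an extended regressor matrix $\Phi_e(t) \in \mathbb{R}^{q \times q}$ is built by applying LTI filters to $\phi(t)$ and $y(t)$, and multiplying the extended model by $\mathrm{adj}(\Phi_e(t))$ yields $\mathcal{Y}(t) = \Delta(t)\theta$ with $\Delta(t) := \det \Phi_e(t)$. The sketch below is a minimal discrete-time illustration of this idea, with a pure delay standing in for the paper's LTI filters; the toy regressor, gains, and delay length are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal DREM sketch for a 2-parameter regression y = phi^T theta.
# A pure D-sample delay stands in for the paper's LTI filters; all
# signals and gains here are hypothetical toy choices.
q = 2
theta = np.array([1.5, -0.7])      # true parameters (unknown to the estimator)
dt, gamma, D = 1e-3, 100.0, 500    # step size, adaptation gain, delay (samples)
theta_hat = np.zeros(q)
phi_buf, y_buf = [], []

for k in range(int(30 / dt)):
    t = k * dt
    phi = np.array([np.sin(t), np.cos(2 * t)])   # a PE regressor
    y = phi @ theta                               # measured output
    phi_buf.append(phi)
    y_buf.append(y)
    if len(phi_buf) <= D:
        continue
    phi_d = phi_buf.pop(0)                        # phi(t - D*dt)
    y_d = y_buf.pop(0)                            # y(t - D*dt)

    # Extended regressor and "mixing" by the adjugate:
    #   adj(Phi_e) Y_e = det(Phi_e) theta = Delta * theta
    Phi_e = np.stack([phi, phi_d])                # 2 x 2
    Y_e = np.array([y, y_d])
    Delta = Phi_e[0, 0] * Phi_e[1, 1] - Phi_e[0, 1] * Phi_e[1, 0]
    adj = np.array([[ Phi_e[1, 1], -Phi_e[0, 1]],
                    [-Phi_e[1, 0],  Phi_e[0, 0]]])
    Ycal = adj @ Y_e                              # q decoupled scalar measurements

    # Decoupled scalar gradient updates (forward Euler); each error
    # theta_i - theta_hat_i decays exponentially when Delta is PE.
    theta_hat += dt * gamma * Delta * (Ycal - Delta * theta_hat)

print(theta_hat)   # should be close to [1.5, -0.7]
```

Each scalar update $\dot{\hat\theta}_i = \gamma \Delta (\mathcal{Y}_i - \Delta \hat\theta_i)$ converges exponentially precisely when $\Delta(t)$ is PE, which is the property the note establishes, almost surely in the filter parameters, whenever the original $\phi(t)$ is PE.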

Citations (31)
