Conditions for Convergence of Dynamic Regressor Extension and Mixing Parameter Estimator Using LTI Filters

30 Jul 2020 · Bowen Yi, Romeo Ortega

In this note we study the conditions for convergence of the recently introduced dynamic regressor extension and mixing (DREM) parameter estimator when the extended regressor is generated using LTI filters. In particular, we are interested in relating these conditions to the ones required for convergence of the classical gradient (or least-squares) estimator, namely the well-known persistent excitation (PE) requirement on the original regressor vector, $\phi(t) \in \mathbb{R}^q$, with $q \in \mathbb{N}$ the number of unknown parameters. Moreover, we study the case when only interval excitation (IE) is available, under which DREM, concurrent-learning and composite-learning schemes ensure global convergence, with DREM converging in finite time. Regarding PE we prove that, under some mild technical assumptions, if $\phi(t)$ is PE then the scalar regressor of DREM, $\Delta(t) \in \mathbb{R}$, is also PE, ensuring exponential convergence. Concerning IE we prove that if $\phi(t)$ is IE then $\Delta(t)$ is also IE. All these results are established in the almost sure sense, namely by proving that the set of filter parameters for which the claims do not hold is of measure zero. The main technical tool used in our proofs is inspired by a study of Luenberger observers for nonautonomous nonlinear systems recently reported in the literature.
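The abstract describes the DREM construction only at a high level. The following minimal Python sketch illustrates the mechanism for $q = 2$ in discrete time, using a pure delay (an LTI operator) to generate the extended regressor; the toy regressor $\phi(t)$, the delay length, and the gain $\gamma$ are illustrative assumptions, not the paper's choices.

```python
import numpy as np

q = 2                                   # number of unknown parameters
theta = np.array([1.5, -0.7])           # true parameters (unknown to the estimator)
theta_hat = np.zeros(q)                 # parameter estimates
gamma = 5.0                             # adaptation gain (assumed)
dt, T = 0.01, 20.0
steps = int(T / dt)
d = int(0.5 / dt)                       # 0.5 s pure delay used as the LTI filter (assumed)

phis, ys = [], []
for k in range(steps):
    t = k * dt
    phi = np.array([np.sin(t), np.cos(2.0 * t)])   # toy PE regressor (assumed)
    y = phi @ theta                                 # linear regression y(t) = phi(t)^T theta
    phis.append(phi)
    ys.append(y)
    if k < d:
        continue

    # Extension: stack the current and delayed regressions into a q x q system.
    Phi_e = np.vstack([phi, phis[k - d]])
    Y_e = np.array([y, ys[k - d]])

    # Mixing: premultiply by adj(Phi_e), which yields
    #   adj(Phi_e) Y_e = det(Phi_e) theta =: Delta(t) theta,
    # i.e. q decoupled scalar regressions Y_i = Delta * theta_i.
    Delta = np.linalg.det(Phi_e)
    adj = np.array([[Phi_e[1, 1], -Phi_e[0, 1]],
                    [-Phi_e[1, 0], Phi_e[0, 0]]])   # adjugate for the 2 x 2 case
    Y = adj @ Y_e

    # Elementwise gradient estimator; convergence is governed by the
    # excitation (PE/IE) of the scalar regressor Delta(t), which is what
    # the paper relates to the excitation of the original phi(t).
    theta_hat += dt * gamma * Delta * (Y - Delta * theta_hat)

print("estimate:", theta_hat, "true:", theta)
```

Because the mixing step reduces the vector estimation problem to $q$ independent scalar ones, each $\hat\theta_i$ converges monotonically whenever $\Delta(t)$ is sufficiently exciting, which is why the paper's PE and IE results on $\Delta(t)$ matter.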
