
Block-Structured Limited-Gain & High-Gain Observers

Updated 4 January 2026
  • The paper introduces block-structured observer architectures that decouple noise amplification from fast state estimation using cascade and limited-gain designs.
  • It formulates mathematical models with observability canonical forms and decomposes the observer into blocks for improved dimensionality and gain management.
  • Stability proofs using ISS and Lyapunov techniques, along with practical tuning guidelines, highlight enhanced robustness in noisy, nonlinear SISO systems.

Block-Structured Limited-Gain High-Gain Observers (BSLG-HGO) are observer architectures that address the fundamental trade-off in high-gain state estimation: achieving rapid convergence and disturbance rejection without incurring excessive amplification of high-frequency measurement noise. These architectures exploit block decomposition and cascade principles, optimizing both dimensionality and gain structure. The innovations discussed below are drawn from two primary research lines: the cascade ESO for active disturbance rejection control under measurement noise (Łakomy et al., 2020), and the limited-gain observer of dimension $2n-2$ with gain powers limited to $2$ rather than $n$ (Astolfi et al., 2015).

1. Mathematical Formulation and Canonical Structure

The underlying systems are single-input single-output (SISO) nonlinear plants cast into observability canonical form. For the cascade ESO (Łakomy et al., 2020), the extended-state approach incorporates lumped disturbances into an augmented state vector:

$$\dot{x}(t) = A_n x(t) + b_n [f(x, t) + g(x, t) u(t) + d^*(t)], \qquad y(t) = c_n^\top x(t) + w(t)$$

with total disturbance $d(x, t) = f(x, t) + d^*(t) + [g(x, t) - \hat h] u(t)$. The extended state $z(t) \in \mathbb{R}^{n+1}$ models both $x(t)$ and $d(x, t)$. In the limited-gain observer paradigm (Astolfi et al., 2015), the plant is

$$\dot{x}_1 = x_2, \quad \ldots, \quad \dot{x}_{n-1} = x_n, \quad \dot{x}_n = f(x) + g(x) u, \qquad y = x_1$$

The observer structure is then decomposed into $(n-1)$ blocks of size two, resulting in a $(2n-2)$-dimensional observer.
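For concreteness, the following NumPy sketch builds the chain-of-integrators matrices of the observability canonical form and the extended-state pattern used by the ESO; the example order $n = 3$ and the function names are illustrative assumptions, not taken from either paper.

```python
import numpy as np

def canonical_matrices(n: int):
    """Observability canonical form: a chain of n integrators.
    A shifts state i+1 into the derivative of state i, b injects the
    input/disturbance channel into the last state, c reads the first
    state as the measured output."""
    A = np.diag(np.ones(n - 1), k=1)          # ones on the superdiagonal
    b = np.zeros((n, 1)); b[-1, 0] = 1.0
    c = np.zeros((n, 1)); c[0, 0] = 1.0
    return A, b, c

n = 3                                         # example order (assumption)
A_n, b_n, c_n = canonical_matrices(n)

# Extended state z = [x; d] in R^{n+1}: the chain gains one extra (disturbance) state.
A_e = np.diag(np.ones(n), k=1)
c_e = np.zeros((n + 1, 1)); c_e[0, 0] = 1.0
b_u = np.zeros((n + 1, 1)); b_u[n - 1, 0] = 1.0   # the control still enters the n-th state
```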

2. Block-Structured and Cascade Architectures

In the cascade ESO framework, the observer is organized into $p$ sequential blocks ("cascade stages"). The first stage receives the raw output $y(t)$ contaminated by noise $w(t)$ and operates at a relatively low bandwidth $\omega_1$. Each subsequent block $i > 1$ receives as its "measurement" only the filtered output of the previous block, $c_{n+1}^\top \hat z^{(i-1)}(t)$, which has reduced noise content. Gains in block $i$ are chosen by pole placement with increasing bandwidths $\omega_i = \nu \omega_{i-1}$, $\nu > 1$, enabling rapid tracking of the residual disturbance while suppressing direct noise amplification.
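A minimal discrete-time sketch of the cascade idea follows, assuming a chain of $n$ integrators, bandwidth-parameterized gains that place each stage's poles at $-\omega_i$, and a forward-Euler update; it conveys the stage-to-stage hand-off of the filtered output rather than reproducing the exact update law of Łakomy et al. (2020).

```python
import numpy as np
from math import comb

def eso_matrices(n):
    """Extended-state observer model: chain of n integrators plus one
    extended (total-disturbance) state; the output is the first state."""
    A = np.diag(np.ones(n), k=1)              # (n+1) x (n+1) shift matrix
    b = np.zeros(n + 1); b[n - 1] = 1.0       # control enters the n-th state
    c = np.zeros(n + 1); c[0] = 1.0
    return A, b, c

def eso_gains(n, omega):
    """Bandwidth parameterization: L_k = C(n+1, k) * omega**k places all
    observer poles of the stage at -omega."""
    return np.array([comb(n + 1, k) * omega**k for k in range(1, n + 2)])

def cascade_eso_step(z_hats, y, u, h_hat, omegas, dt):
    """One Euler step of a p-stage cascade: stage 1 is driven by the noisy
    measurement y, stage i > 1 only by the output estimate of stage i-1."""
    n = z_hats[0].size - 1
    A, b, c = eso_matrices(n)
    meas = y
    for i, omega in enumerate(omegas):
        L = eso_gains(n, omega)
        dz = A @ z_hats[i] + b * h_hat * u + L * (meas - c @ z_hats[i])
        z_hats[i] = z_hats[i] + dt * dz
        meas = float(c @ z_hats[i])           # filtered "measurement" for the next stage
    return z_hats

# Example: n = 2 plant, p = 3 stages with omega_i = nu * omega_{i-1} (assumed values).
n, p, omega1, nu = 2, 3, 20.0, 3.0
z_hats = [np.zeros(n + 1) for _ in range(p)]
z_hats = cascade_eso_step(z_hats, y=0.1, u=0.0, h_hat=1.0,
                          omegas=[omega1 * nu**i for i in range(p)], dt=1e-3)
```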

Limited-gain high-gain observers (Astolfi et al., 2015) employ an interconnected block structure, where each block $\xi_i \in \mathbb{R}^2$ is coupled to adjacent blocks through specific matrix connections and innovation signals. Gains are applied as $\mathrm{diag}(\ell, \ell^2)$, confining the maximal power of the high-gain parameter $\ell$ to $2$, rather than escalating to $n$ as in traditional high-gain designs.
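The contrast in gain powers can be made explicit with a small sketch; the order, the value of $\ell$, and the binomial pole-placement coefficients are illustrative assumptions, and the full block interconnection of the limited-gain observer is not reproduced here. The classical gain vector contains entries up to $\ell^n$, while each two-dimensional block of the limited-gain design only uses $\mathrm{diag}(\ell, \ell^2)$.

```python
import numpy as np
from math import comb

def classical_hgo_gains(n, ell):
    """Classical high-gain observer gain vector; the k-th entry scales as
    ell**k, so the largest entry grows like ell**n."""
    return np.array([comb(n, k) * ell**k for k in range(1, n + 1)])

def limited_gain_blocks(n, ell):
    """Per-block gain matrices diag(ell, ell**2) for the n-1 two-dimensional
    blocks: no entry ever exceeds ell**2."""
    return [np.diag([ell, ell**2]) for _ in range(n - 1)]

n, ell = 5, 50.0                                        # example order and gain (assumptions)
print(classical_hgo_gains(n, ell).max())                # ~ ell**5 = 3.125e8
print(max(B.max() for B in limited_gain_blocks(n, ell)))  # ell**2 = 2500
```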

3. Stability, Convergence, and Error Dynamics

Both designs employ input-to-state stability (ISS) and Lyapunov techniques for convergence proofs.

Cascade ESO

Let $e_i(t) = z(t) - \hat z^{(i)}(t) - b_{n+1} b_{n+1}^\top \sum_{j=1}^{i-1} \hat z^{(j)}(t)$. Under boundedness and Lipschitz hypotheses, ISS is achieved:

$$\limsup_{t \to \infty} \|e_i(t)\| \leq \sum_{j=1}^{i-1} \frac{C_{i,j}}{\omega_j^{n+2-j}} \sup_t \|e_j(t)\| + \frac{C_i'}{\omega_i} \sup_t |\dot d(t)| + \sum_{j=1}^{i} \frac{D_{i,j}}{\omega_j^{n+1-(j-1)}} \sup_t |w(t)|$$

Exponential convergence of $e_i \to 0$ at a rate of at least $\omega_i$ is guaranteed in the absence of noise and disturbance derivatives.
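As a rough numeric reading of this bound, the sketch below evaluates its three contributions for stage $i = 2$ of a two-stage cascade; all constants $C_{i,j}$, $C_i'$, $D_{i,j}$ and the sup-norms are illustrative assumed values, not numbers from the paper.

```python
# Illustrative evaluation of the ISS bound for stage i = 2 (n = 2; all values assumed).
n = 2
omega = [20.0, 60.0]                       # omega_1, omega_2
C, Cp, D = 1.0, 1.0, 1.0                   # assumed C_{2,1}, C_2', D_{2,j}
sup_e1, sup_ddot, sup_w = 0.5, 1.0, 0.01

coupling = C / omega[0]**(n + 2 - 1) * sup_e1                          # j = 1 coupling term
disturb  = Cp / omega[1] * sup_ddot                                    # residual-disturbance term
noise    = sum(D / omega[j]**(n + 1 - j) * sup_w for j in range(2))    # noise terms, j = 1, 2
print(coupling, disturb, noise)
```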

Limited-Gain High-Gain Observer

Using scaled errors and block-tridiagonal system matrices, the state estimation error is bounded as

$$\|\hat{x}(t) - x(t)\| \leq \max\Big\{ c_1 \ell^{n-1} e^{-c_2 \ell t}\,\|\hat{x}(0) - x(0)\|,\; c_3 \|D_n(\ell)^{-1} d\|_\infty,\; c_4 \ell^{n-1} \|\nu\|_\infty \Big\}$$

with $D_n(\ell) = \mathrm{diag}(\ell^{n-1}, \ldots, \ell, 1)$. Only powers of $\ell$ up to $\ell^2$ appear in any gain block, which improves robustness to measurement noise.
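To see the trade-off encoded in this bound numerically, the short sketch below builds $D_n(\ell)$ and evaluates the transient and noise terms for a few values of $\ell$; the constants $c_1$, $c_2$, $c_4$, the order $n$, and the time instant are arbitrary illustrative assumptions.

```python
import numpy as np

def Dn(n, ell):
    """Scaling matrix D_n(ell) = diag(ell**(n-1), ..., ell, 1)."""
    return np.diag([float(ell)**k for k in range(n - 1, -1, -1)])

# Illustrative constants (c1, c2, c4, n and t are assumptions, not values from the paper).
n, c1, c2, c4, t = 4, 1.0, 0.5, 1.0, 0.1
for ell in (10.0, 50.0, 200.0):
    transient = c1 * ell**(n - 1) * np.exp(-c2 * ell * t)   # decays faster for larger ell
    noise_amp = c4 * ell**(n - 1)                            # but noise is amplified more
    print(f"ell={ell:6.0f}  transient~{transient:.3e}  noise factor~{noise_amp:.3e}")
```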

4. Quantitative Noise Attenuation and Comparison

A salient feature of both architectures is their ability to suppress measurement noise amplification relative to classical high-gain observers.

| Observer Type | Dimensionality | Max Gain Power | Noise Amplification Scaling |
|---|---|---|---|
| Classical high-gain | $n$ | $\ell^n$ | $\sim \ell^{n-1}$ |
| Limited-gain HGO | $2n-2$ | $\ell^2$ | $\sim \ell^{n-1}$ (lower prefactor) |
| Cascade ESO | $(n+1)\cdot p$ | $\omega_1$ | $\sim 1/\omega_1^{n+1}$ |

Standard ESO achieves fast estimation but directly amplifies measurement noise based on the largest bandwidth. Cascade architectures decouple noise attenuation from convergence rate: a low $\omega_1$ shields against noise, while downstream blocks assure rapid estimation via elevated $\omega_i$.

High-frequency error harmonics in the limited-gain observer decay as $\omega_N^{-(r_i'-1)}$ with $r_i' \geq 2$ for states $i \geq 2$, in contrast to the classical $\omega_N^{-1}$ scaling (Astolfi et al., 2015).

5. Practical Implementation and Tuning Guidelines

Cascade ESO and limited-gain designs provide specific practical recommendations:

  • Initial Stage Bandwidth ($\omega_1$): Select $\omega_1$ only as large as necessary to track the slowest disturbance frequencies; typical practice uses $2$–$5\times$ the highest disturbance frequency (Łakomy et al., 2020).
  • Inter-Block Bandwidth Ratio ($\nu$): Recommended $\nu$ is in the $2$–$4$ range; higher stages exploit the elevated bandwidth to hasten residual-error estimation while remaining shielded from raw measurement noise.
  • Number of Stages ($p$): Typically $p = 2$ or $3$ yields sufficient noise suppression without excessive complexity.
  • Observer Dimension ($2n-2$): For limited-gain designs, the observer expands to $2n-2$ states but delivers improved numerical conditioning and noise robustness.
  • Gain Selection: For blockwise observers, pick gains so that the associated block-tridiagonal matrix is Hurwitz; for the cascade ESO, pole-placement coefficients set the desired bandwidth of each block (see the sketch below).
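The gain-selection and bandwidth-chain recommendations above can be checked with a short script; the helper names and the tuning values $\omega_1 = 20$, $\nu = 3$, $p = 3$ are assumptions for illustration. It builds the error matrix $A - L c^\top$ of each cascade stage with bandwidth-parameterized gains and verifies that it is Hurwitz.

```python
import numpy as np
from math import comb

def eso_error_matrix(n_ext, omega):
    """Observer error matrix A - L c^T for an n_ext-dimensional ESO stage with
    bandwidth-parameterized gains L_k = C(n_ext, k) * omega**k (poles at -omega)."""
    A = np.diag(np.ones(n_ext - 1), k=1)
    L = np.array([comb(n_ext, k) * omega**k for k in range(1, n_ext + 1)])
    c = np.zeros(n_ext); c[0] = 1.0
    return A - np.outer(L, c)

def is_hurwitz(M, tol=1e-9):
    """All eigenvalues strictly in the open left half-plane."""
    return bool(np.all(np.linalg.eigvals(M).real < -tol))

omega1, nu, p, n = 20.0, 3.0, 3, 2              # example tuning (assumptions)
omegas = [omega1 * nu**i for i in range(p)]     # bandwidth chain omega_i = nu * omega_{i-1}
for w in omegas:
    M = eso_error_matrix(n + 1, w)
    print(f"omega={w:7.1f}  Hurwitz: {is_hurwitz(M)}  "
          f"Re(poles) ~ {np.linalg.eigvals(M).real.round(2)}")
```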

6. Theoretical Significance and Future Directions

BSLG-HGO approaches resolve the classical trade-off in high-gain observer theory: ensuring rapid convergence while mitigating noise sensitivity. Cascade architectures isolate noise amplification in the low-bandwidth front end and track the residual error rapidly downstream; limited-gain observers restrict the internal gain powers. Both advance estimation theory for noisy, uncertain nonlinear systems.

Open questions include:

  • Systematic optimization of the cascade depth $p$ and bandwidth chain $\{\omega_i\}$.
  • Extensions to output-feedback ADRC with nonlinear or adaptive blocks, and to infinite-dimensional (PDE) systems (Łakomy et al., 2020).
  • Comprehensive frequency-domain analysis of noise transfer functions beyond ISS proofs.
  • Sampled-data, multi-output, and disturbance-rejection refinements for limited-gain observers (Astolfi et al., 2015).

The impact is pronounced in control engineering applications, where precise state and disturbance estimation in the presence of sensor noise is essential for robust active disturbance rejection and feedback control. Experimental validation across a broader variety of plants remains an active area for performance refinement and selection heuristics.
