
Sliding-Mode Observer (SMO)

Updated 31 December 2025
  • Sliding-mode observer (SMO) is a state estimation scheme using discontinuous injection terms to achieve finite-time convergence and robustness against unknown inputs and model uncertainties.
  • It combines a nominal observer with high-gain, non-smooth correction terms that force the estimation error onto a designed sliding manifold, enabling precise state and input recovery.
  • SMOs are applicable to time-varying, nonlinear, and fractional-order systems, offering BIBS stability and modular tuning of gains and differentiator orders for practical implementations.

A sliding-mode observer (SMO) is a state estimation scheme leveraging discontinuous injection terms to achieve finite-time, robust convergence of state and, where applicable, unknown input (disturbance/fault) estimates. SMOs are designed for dynamic systems subject to model uncertainty, unknown exogenous inputs, or bounded disturbances. Their distinguishing feature is the use of high-gain discontinuous (or boundary-layer) correction terms that enforce the system error trajectories onto a user-defined sliding manifold, achieving invariance with respect to matched uncertainties and rapid convergence rates. SMO theory covers time-varying, time-invariant, nonlinear, and even fractional-order systems, and encompasses both first-order and higher-order (e.g., super-twisting) constructions to address chattering and convergence precision.

1. Core Structure of Sliding-Mode Observers

The canonical setup for a sliding-mode observer considers a dynamic system with measured outputs and possibly unknown inputs:

$$\begin{aligned} \dot{x}(t) &= A(t)x(t) + F(t)u(t) + D(t)w(t), \\ y(t) &= C(t)x(t), \end{aligned}$$

where $x \in \mathbb{R}^n$ is the state vector, $u(t) \in \mathbb{R}^q$ is a known input, $w(t) \in \mathbb{R}^m$ is an unknown bounded input or disturbance, and $y(t) \in \mathbb{R}^r$ is the measured output (Tranninger et al., 2018).

The basic SMO construction consists of a nominal observer (e.g., a Luenberger-like design) augmented with a discontinuous or saturating injection term:

$$\dot{\hat{x}} = A(t)\hat{x} + F(t)u(t) + L(t)\bigl(y - C(t)\hat{x}\bigr) + \text{SM injection}.$$

The injection term is purposely non-smooth (e.g., sign, saturation, or higher-order sliding function) and acts on the output estimation error (the sliding variable), forcing the trajectory onto a sliding manifold $s = e_y = C(x - \hat{x}) = 0$ in finite time. Depending on the problem class (time-varying, uncertain, nonlinear, fractional-order), the injection may be implemented via first-order or higher-order sliding algorithms (e.g., Levant's super-twisting, robust exact differentiators) (Tranninger et al., 2018, Mousavi et al., 2017).
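As a toy illustration of the injection mechanism, the sketch below simulates a first-order SMO for a hypothetical scalar system (not the paper's setup): a sign injection whose gain exceeds the disturbance bound drives the estimation error to a neighborhood of zero despite the unknown input.

```python
import numpy as np

# Minimal first-order SMO sketch for an illustrative scalar system
#   x_dot = a*x + w(t),  y = x,  |w| <= w_bar
# Observer: xhat_dot = a*xhat + rho*sign(y - xhat), with rho > w_bar.
# The discontinuous injection dominates the bounded disturbance, so the
# error e = x - xhat reaches the sliding manifold e = 0 in finite time.

a = -1.0                 # nominal dynamics (assumed known)
w_bar = 0.5              # assumed bound on the unknown input
rho = 1.0                # injection gain, chosen > w_bar
dt, T = 1e-4, 5.0        # explicit Euler step and horizon

x, xhat = 1.0, -1.0      # true state and (deliberately wrong) estimate
errs = []
for k in range(int(T / dt)):
    t = k * dt
    w = w_bar * np.sin(2.0 * t)              # unknown bounded disturbance
    e_y = x - xhat                           # output estimation error
    x += dt * (a * x + w)
    xhat += dt * (a * xhat + rho * np.sign(e_y))
    errs.append(abs(x - xhat))

print(f"final |x - xhat| = {errs[-1]:.2e}")  # small chattering residual
```

In discrete time the error does not stay exactly at zero but chatters within a band of order `dt * rho`; boundary-layer (saturation) injections trade this chattering for a small steady bias.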

2. Finite-Time and Robust Convergence

Finite-time convergence is a fundamental property of SMOs, deriving from the injection's ability to dominate the effect of bounded matched disturbances. In linear, time-varying settings, robustness and convergence are certified using Lyapunov analysis, often relying on contraction LMIs or Lyapunov-exponent arguments for non-autonomous systems.

  • In higher-order SMOs, a sliding-mode differentiator reconstructs the derivatives of the sliding variable up to the observability index, and the state correction is computed by explicit formulae exploiting the system's observability structure. The error can be brought to zero exactly after a finite reaching time $t_f$ (Tranninger et al., 2018).
  • For a scalar output $f_0(t)$ with a bounded $\nu$-th derivative, the higher-order robust exact differentiator ensures that each estimate $z_i$ converges exactly to $f_0^{(i)}$ in finite time, given appropriate gains and upper bounds (Tranninger et al., 2018).
  • In observer chains of length $\nu$ (the observability index), a cascade of higher-order sliding-mode differentiators recovers both the initial state estimation error and the unknown input, enabling exact state reconstruction under suitable strong observability and boundedness conditions.
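A minimal sketch of the first-order robust exact differentiator (super-twisting structure) illustrates the second bullet: it recovers $f_0$ and $f_0'$ from samples of $f_0$ alone, assuming a known bound $L$ on $|f_0''|$. The gains 1.5 and 1.1 are commonly used values; the test signal is illustrative and not from the paper.

```python
import numpy as np

# First-order robust exact differentiator (Levant / super-twisting form):
#   z0_dot = -1.5 * sqrt(L) * |z0 - f|^{1/2} * sign(z0 - f) + z1
#   z1_dot = -1.1 * L * sign(z0 - f)
# Under |f''| <= L, z0 -> f and z1 -> f' in finite time (up to
# discretization-induced chattering).

L = 2.0                  # assumed bound on the 2nd derivative (|sin''| <= 1)
dt, T = 1e-4, 5.0

z0, z1 = 0.0, 0.0        # estimates of f and f'
for k in range(int(T / dt)):
    t = k * dt
    f = np.sin(t)        # sampled signal to be differentiated
    e = z0 - f
    v0 = -1.5 * np.sqrt(L) * np.sqrt(abs(e)) * np.sign(e) + z1
    z0 += dt * v0
    z1 += dt * (-1.1 * L * np.sign(e))

print(f"signal error     |z0 - sin(T)| = {abs(z0 - np.sin(T)):.2e}")
print(f"derivative error |z1 - cos(T)| = {abs(z1 - np.cos(T)):.2e}")
```

Higher-order versions stack additional integrators with fractional-power injections to recover further derivatives, at the cost of tighter gain tuning.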

3. Conditions for Stability and Robustness

The stability and effectiveness of SMOs depend on several structural and spectral properties of the system:

  • Directional Detectability and Strong Observability: For time-varying systems, convergence requires (a) that the system $(A(t), C(t))$ be directionally detectable for every non-negative Lyapunov exponent and (b) strong forward regularity of the underlying exponents. Directional detectability is formalized by ensuring the averaged diagonal elements of a QR-decomposed output map remain positive on non-stable directions; this ensures output injection quickly suppresses any non-asymptotically stable modes (Tranninger et al., 2018).
  • BIBS (Bounded-Input Bounded-State) Stability: With proper gain selection such that the estimate-correction loop dominates the negative exponents (in the sense of diagonalization along non-stable subspaces), the full error system with unknown input is BIBS-stable: $\|e(t)\| \le c(\bar{w}, e(t_0))$ for all $t \ge t_0$ (Tranninger et al., 2018).
  • Finite-Time Sliding and Input Reconstruction: Once on the sliding manifold, exact state and input recovery is possible if the system’s strong observability index equals the differentiator order and if the system admits a suitable factorization for reconstructing state error from output error derivatives (Tranninger et al., 2018).
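For time-invariant systems, a standard numerical proxy for the strong observability condition above is the absence of invariant zeros of the triple $(A, D, C)$: these are the finite generalized eigenvalues of the Rosenbrock pencil. The sketch below uses illustrative matrices (not the paper's time-varying setting) and a hypothetical helper name.

```python
import numpy as np
from scipy.linalg import eig

# Invariant zeros of an LTI triple (A, D, C) as the finite generalized
# eigenvalues of the pencil  [[A, D], [C, 0]] - s * [[I, 0], [0, 0]].
# Strong observability (for square SISO-style pencils) requires that no
# finite zeros exist. This is a simplified LTI stand-in for the paper's
# time-varying conditions.

def invariant_zeros(A, D, C):
    n, m, p = A.shape[0], D.shape[1], C.shape[0]
    M = np.block([[A, D], [C, np.zeros((p, m))]])
    N = np.block([[np.eye(n), np.zeros((n, m))],
                  [np.zeros((p, n)), np.zeros((p, m))]])
    w = eig(M, N, right=False)           # infinite eigenvalues come out non-finite
    return w[np.isfinite(w)]

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
D = np.array([[0.0], [1.0]])

# Output with a transmission zero at s = 1 (NOT strongly observable):
z_with = invariant_zeros(A, D, np.array([[-1.0, 1.0]]))
# Output with no finite zeros (candidate for strong observability):
z_without = invariant_zeros(A, D, np.array([[1.0, 0.0]]))
print(z_with, z_without)
```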

4. Observer Architecture and Cascade Correction

The overall SMO design typically follows a cascaded architecture:

  • Nominal Observer: Computes a preliminary estimate $\tilde{x}(t)$ using high-gain correction based on the output error.
  • Sliding-Mode Corrector: Applies a chain of sliding-mode differentiators to the output error, reconstructing its time derivatives up to the system's observability index.
  • Error Estimation and State Correction: Employs an explicit (often algebraic) formula involving the reconstructor matrix $H_e^{-1}(t)$ and the concatenated vector of differentiated output errors to recover the current state estimation error. The final estimate is updated as $\hat{x}(t) = \tilde{x}(t) + \tilde{e}(t)$, resulting in exact state matching after a finite correction time.

This structure separates the BIBS/ISS robustness properties of the high-gain stage from the precise reconstruction property of the higher-order correction (Tranninger et al., 2018).
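The algebraic correction step can be sketched as follows for an illustrative LTI pair without unknown input: a plain stacked observability map stands in for the reconstructor $H_e$ of the text, and the differentiator outputs are emulated by exact derivatives.

```python
import numpy as np

# Algebraic state-error recovery: once the sliding-mode differentiators
# supply the output error e_y and its derivatives, the state estimation
# error solves the stacked linear system
#   [e_y; e_y_dot] = [C; C A] e.
# (Illustrative LTI matrices; the paper's H_e also accounts for the
# unknown-input channel.)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

H = np.vstack([C, C @ A])                 # stacked observability map
e_true = np.array([0.7, -1.3])            # unknown state estimation error
derivs = H @ e_true                       # what the differentiators deliver
e_rec = np.linalg.solve(H, derivs)        # algebraic recovery of the error

print(np.allclose(e_rec, e_true))         # exact reconstruction
```

The corrected estimate is then `xhat = xtilde + e_rec`, matching the update $\hat{x} = \tilde{x} + \tilde{e}$ described above.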

5. Parameter Tuning and Practical Implementation

Effective parameterization is crucial:

  • Observer Gain ($p$): Must be sufficiently large so that, for each non-stable mode associated with a non-negative Lyapunov exponent $\lambda_j$, the negative feedback $-p\overline{\widetilde{R}}_{jj}$ turns the net exponent negative.
  • Differentiator Order ($\nu$): Should be set to the observability index of $(A, D, C)$.
  • Differentiator Gains ($\lambda_i$): Selected from standard tables (e.g., Levant's gain sequences); the bound $L$ must safely exceed any anticipated maximum derivative magnitude.
  • QR Initialization: The initial $Q(t_0)$ is chosen as a random orthonormal basis; the dimension $k$ of the non-stable subspace is determined from the signs of finite-time Lyapunov estimates.
  • Matrix Regularity: Ensure $C^T C Q$ is full rank in all required directions; verify directional detectability accordingly.

Numerical integration is typically carried out with projected Runge-Kutta methods for the QR-ODE and standard RK4 for the remaining equations, with sub-millisecond step sizes attainable in practical applications (Tranninger et al., 2018).
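A minimal sketch of this integration scheme: a standard RK4 step for the observer ODEs, with a QR re-projection after each step to keep the propagated basis $Q$ orthonormal (a simple stand-in for a projected Runge-Kutta method; the log-diagonals of $R$ yield finite-time Lyapunov estimates). The system matrix is illustrative.

```python
import numpy as np

def rk4_step(f, t, x, dt):
    """One classical Runge-Kutta 4 step for x_dot = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt / 2 * k1)
    k3 = f(t + dt / 2, x + dt / 2 * k2)
    k4 = f(t + dt, x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])     # illustrative, stable dynamics

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 2)))   # random orthonormal init

dt, T = 1e-3, 1.0
lyap_sum = np.zeros(2)
for k in range(int(T / dt)):
    Q = rk4_step(lambda t, M: A @ M, k * dt, Q, dt)  # propagate the basis
    Q, R = np.linalg.qr(Q)                           # re-project onto the
    s = np.sign(np.diag(R))                          # orthonormal frame,
    Q, R = Q * s, (R.T * s).T                        # keeping diag(R) > 0
    lyap_sum += np.log(np.diag(R))

lyap = lyap_sum / T                 # finite-time Lyapunov exponent estimates
print(np.allclose(Q.T @ Q, np.eye(2)))   # True: basis stays orthonormal
print(lyap)
```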

6. Numerical Example and Performance Metrics

A representative simulation as reported in (Tranninger et al., 2018):

  • System: $n = 8$ states, non-stable subspace dimension $k = 2$, observability index $\nu = 2$.
  • Time-varying $A(t)$ (e.g., sinusoidal entries), full-column-rank $D \in \mathbb{R}^{8 \times 1}$, $C = \begin{bmatrix} I_4 & 0 \end{bmatrix}$.
  • Unknown input: $w(t) = 0.3 + 10\sin(0.2\pi t) + 3\sin(0.8\pi t)$.
  • Observer gains: $p = 30$, differentiator order $1$, $\lambda_0 = 1.1$, $\lambda_1 = 1.5$.
  • Integration step size: $1$ ms; the finite-time differentiator converges in $\approx 0.1$ s; post-correction estimation error $\|x - \hat{x}\| \sim 10^{-3}$; with measurement noise of standard deviation $\sigma = 10^{-3}$, the estimation error remains below $10^{-2}$.

This confirms finite-time exact state reconstruction, robustness to unknown bounded disturbances, and noise immunity within a prescribed upper bound.

7. Theoretical and Practical Significance

The SMO framework described in (Tranninger et al., 2018) extends sliding-mode techniques to non-autonomous, time-varying systems subject to unknown bounded inputs, overcoming classical restrictions of matching, strong observability, and input structure. The method separates fast, robust contraction of the error (to a boundary layer) from exact reconstruction (via higher-order sliding-mode differentiators), enabling practical, noise-robust, and universally convergent observer designs for systems where conventional linear and adaptive observers fail to deliver finite-time performance. The approach is also modular, allowing the practitioner to adapt the differentiator order and gains to specific system structure and observability characteristics.

References:

(Tranninger et al., 2018) Non-Uniform Stability, Detectability, and Sliding Mode Observer Design for Time Varying Systems with Unknown Inputs
