Stable Estimator of Dynamical Systems (SEDS)

Updated 22 November 2025
  • SEDS is a framework for learning dynamical models from time series data that guarantees stability by projecting estimates onto sets of stable or energy-preserving dynamics.
  • It employs convex relaxations, sum-of-squares optimization, and Lyapunov-based parameterizations to guarantee asymptotically stable, bounded, or globally stable behavior in linear, nonlinear, and quadratic models.
  • Empirical studies demonstrate that SEDS outperforms traditional OLS and NARX approaches by achieving lower validation errors and enhanced regularization in diverse benchmark identification tasks.

A Stable Estimator of Dynamical Systems (SEDS) is a framework for learning dynamical models from time series data with explicit guarantees of stability—meaning that either the identified linear/nonlinear system is ensured to have suitable asymptotic (or bounded) behavior, or the estimated operator is projected onto a class of stable/energy-preserving dynamics. SEDS methodologies encompass convex relaxations for nonlinear models, information-theoretically optimal matrix projections for linear systems, and structured operator learning for quadratic and higher-order continuous-time systems. These approaches play a foundational role in identification, operator inference, and scientific machine learning where qualitative properties such as stability, boundedness, and regularization are key.

1. SEDS for Linear System Identification

The SEDS approach for linear systems is grounded in the projection of unconstrained least-squares system matrices onto the non-convex set of asymptotically stable matrices in an information-theoretic sense. Given a stable discrete-time system $x_{t+1} = \Theta_{\text{true}}\, x_t + w_t$ with $\rho(\Theta_{\text{true}}) < 1$ and i.i.d. noise $w_t$, the ordinary least squares (OLS) estimator $\widehat{\Theta}_T$ does not, in general, yield a stable matrix. SEDS remedies this by solving the reverse $I$-projection:

$$\mathcal{P}(A') \in \arg\min_{A \in \Theta} I(A', A)$$

where $I(A', A) = \tfrac{1}{2}\mathrm{Tr}\big[S_w^{-1}(A' - A)\, S_A\, (A' - A)^\top\big]$ and $S_A$ is the solution to the associated Lyapunov equation $S_A = A S_A A^\top + S_w$ for $A \in \Theta$. This projection is optimal in a moderate deviations/large deviations sense and can be computed in closed form by solving a corresponding Linear Quadratic Regulator (LQR) problem and applying the resulting optimal feedback gain:

$$A_{\text{proj}} = (I + R^{-1}P)^{-1} A'$$

where $P$ solves the algebraic Riccati equation and $R = (2\delta S_w)^{-1}$ with small $\delta$ (Jongeneel et al., 2021).
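
The divergence itself is straightforward to evaluate once the Lyapunov equation is solved. Below is a minimal sketch in Python, assuming SciPy's discrete Lyapunov solver; the function name and interface are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def i_divergence(A_prime, A, Sw):
    """Evaluate I(A', A) = 0.5 * Tr[Sw^{-1} (A' - A) S_A (A' - A)^T],
    where S_A solves the Lyapunov equation S_A = A S_A A^T + Sw
    (A must be stable for the solution to exist)."""
    S_A = solve_discrete_lyapunov(A, Sw)
    D = A_prime - A
    return 0.5 * np.trace(np.linalg.solve(Sw, D @ S_A @ D.T))
```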

The SEDS process, given data $\{x_t\}$, comprises:

  1. Forming the OLS estimate $\widehat{\Theta}_T$.
  2. Solving the algebraic Riccati equation for $P$.
  3. Computing the stabilizing feedback gain $K^*$.
  4. Defining $\widehat{\Theta}_T^{\mathrm{SEDS}} = \widehat{\Theta}_T + K^*$.

This estimator is always asymptotically stable and statistically consistent.
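
A minimal end-to-end sketch of these four steps, assuming SciPy's discrete algebraic Riccati solver. The state weight `Q` below is a placeholder (the paper derives the exact weighting from the deviation-rate objective), so treat this as illustrative rather than the paper's algorithm.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def seds_linear(X, delta=1e-3, Sw=None):
    """Project the OLS estimate of x_{t+1} = A x_t + w_t onto the
    stable set via the LQR-based closed-form projection."""
    n = X.shape[0]
    X0, X1 = X[:, :-1], X[:, 1:]
    A_ols = X1 @ np.linalg.pinv(X0)          # step 1: OLS estimate
    Sw = np.eye(n) if Sw is None else Sw
    R = np.linalg.inv(2.0 * delta * Sw)      # control weight R = (2 delta S_w)^{-1}
    B = np.eye(n)                            # projection acts as full-state feedback
    Q = delta * np.eye(n)                    # placeholder state weight (assumption)
    P = solve_discrete_are(A_ols, B, Q, R)   # step 2: algebraic Riccati equation
    K = np.linalg.solve(R + P, P @ A_ols)    # step 3: LQR gain; K* = -K in the text
    return A_ols - K                         # step 4: equals (I + R^{-1}P)^{-1} A_ols
```

Since $(A, I)$ is trivially stabilizable and the placeholder state weight is positive definite, the returned closed-loop matrix is Schur stable by standard LQR theory; this can be checked with `np.max(np.abs(np.linalg.eigvals(seds_linear(X))))`.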

2. SEDS with Stability-By-Design in Polynomial and Nonlinear Systems

In nonlinear system identification, SEDS can be realized through convex relaxations and sum-of-squares (SOS) techniques. The canonical model structure uses an implicit polynomial formulation:

$$e(x_{t+1}) = f(x_t, u_t), \qquad y_t = g(x_t, u_t)$$

with $e, f, g$ multivariate polynomials affine in the parameters $\theta$.

Global incremental $\ell^2$-stability is enforced by Lyapunov-based matrix inequalities. Explicitly, for $P \succ 0$, stability constraints are encoded as pointwise linear matrix inequalities (LMIs) in $(x, u)$, converted to a tractable sum-of-squares SDP representation with Gram matrix $Q \succeq 0$:

$$M(\theta, P; x, u) \succeq 0$$

where $M$ is a block matrix constructed from the Jacobians and $P$. The resulting optimization jointly minimizes a convex surrogate of long-term simulation error via Lagrangian relaxation:

$$J_{\text{LR}}(\theta) := \sup_{\Delta}\big[\|\mathcal{G}\Delta + \eta\|^2 - 2\Delta^\top(\mathcal{F}\Delta - \epsilon)\big]$$

subject to the SOS/SDP stability constraints (Umenberger et al., 2018).
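
The mechanics of the SOS step can be seen in a toy univariate certificate. The sketch below, assuming CVXPY, checks that $p(x) = x^4 - 2x^2 + 1 = (x^2 - 1)^2$ admits a Gram representation $p(x) = z(x)^\top Q z(x)$ with $Q \succeq 0$ and monomial basis $z(x) = (1, x, x^2)$; the SEDS constraint applies the same coefficient-matching idea to the block matrix $M(\theta, P; x, u)$ jointly in $(x, u)$.

```python
import cvxpy as cp

# Coefficients of p(x) = x^4 - 2x^2 + 1, ordered x^0 .. x^4.
coeffs = [1.0, 0.0, -2.0, 0.0, 1.0]

# p is SOS iff p(x) = z(x)^T Q z(x) for some Q >= 0 with z(x) = (1, x, x^2).
Q = cp.Variable((3, 3), PSD=True)

# Match coefficients of z^T Q z against p, degree by degree.
constraints = [
    Q[0, 0] == coeffs[0],                # x^0
    2 * Q[0, 1] == coeffs[1],            # x^1
    Q[1, 1] + 2 * Q[0, 2] == coeffs[2],  # x^2
    2 * Q[1, 2] == coeffs[3],            # x^3
    Q[2, 2] == coeffs[4],                # x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)  # "optimal" => a Gram matrix (an SOS certificate) exists
```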

This methodology yields models guaranteed to be globally stable, with empirical evidence for strong generalization and regularization effects in benchmark mechanical and flexible-beam identification tasks.

3. SEDS in Quadratic and Operator-Inference Models

SEDS methodologies extend to continuous-time quadratic models and operator inference via Lyapunov-based parametrizations:

$$\dot{x}(t) = A x(t) + H\,[x(t) \otimes x(t)]$$

where $A$ is $n \times n$, $H$ is $n \times n^2$, and $\otimes$ denotes the Kronecker product (Goyal et al., 2023).

Stability is imposed by directly parameterizing $A$ and $H$ so that a desired Lyapunov function is preserved. For local asymptotic stability, $A$ is factorized as $A = (J - R)Q$, with $J$ skew-symmetric and $R, Q$ symmetric positive definite, so that $V(x) = \frac{1}{2}x^\top Q x$ is a Lyapunov function and $\dot{V}(x) < 0$ near $x = 0$. Global asymptotic stability adds the constraint that $H$ is energy-preserving, i.e., $x^\top Q H (x \otimes x) = 0$ for all $x$.
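
A minimal sketch of this factorization, mapping unconstrained matrices to a stable pair $(A, Q)$; the diagonal shift `eps` is an assumption used here to keep $R$ and $Q$ strictly positive definite.

```python
import numpy as np

def stable_linear_part(W, L, M, eps=1e-4):
    """Map unconstrained (W, L, M) to A = (J - R) Q with J skew-symmetric
    and R, Q symmetric positive definite, so V(x) = 0.5 x^T Q x decays
    along the linear dynamics."""
    n = W.shape[0]
    J = W - W.T                    # skew-symmetric by construction
    R = L @ L.T + eps * np.eye(n)  # symmetric positive definite
    Q = M @ M.T + eps * np.eye(n)  # symmetric positive definite
    return (J - R) @ Q, Q
```

The decay follows because $Q J Q$ is skew-symmetric, so $\dot{V}(x) = x^\top Q A x = -x^\top Q R Q x < 0$ for $x \neq 0$.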

For systems with only bounded attractors (e.g., the Lorenz system), the parameterization ensures all solutions remain within a compact trapping region by shifting the origin and enforcing Lyapunov decrease away from it.

Inference is carried out using an integral-form loss minimized through modern gradient-based solvers, with stability built into the parameterization, eliminating the need for explicit constraints or projections.
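
As a rough illustration of this pipeline, the sketch below (assuming PyTorch) trains the $(J - R)Q$ parameterization by differentiating through a classical Runge–Kutta step. The one-step mean-squared objective, the random placeholder data, and all variable names are illustrative stand-ins for the paper's integral-form loss.

```python
import torch

def rk4_step(f, x, dt):
    # One classical Runge-Kutta step; autodiff flows through the arithmetic.
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

n = 3
# Unconstrained parameters; stability enters only through the (J - R)Q map.
W = torch.randn(n, n, requires_grad=True)
L = torch.randn(n, n, requires_grad=True)
M = torch.randn(n, n, requires_grad=True)
H = torch.zeros(n, n * n, requires_grad=True)  # quadratic operator (toy init)
opt = torch.optim.Adam([W, L, M, H], lr=1e-2)

def vector_field(x):
    eye = torch.eye(n)
    A = ((W - W.T) - (L @ L.T + 1e-4 * eye)) @ (M @ M.T + 1e-4 * eye)
    return A @ x + H @ torch.kron(x, x)

dt, x_data = 0.01, torch.randn(50, n)  # placeholder trajectory snapshots
for _ in range(200):
    opt.zero_grad()
    pred = torch.stack([rk4_step(vector_field, x, dt) for x in x_data[:-1]])
    loss = torch.mean((pred - x_data[1:]) ** 2)
    loss.backward()
    opt.step()
```

Note that this sketch leaves $H$ unconstrained; the globally stable variant additionally parameterizes $H$ to satisfy the energy-preservation condition above.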

4. Efficient Algorithms and Complexity Reduction

In the nonlinear SEDS framework (e.g., LR-SEDS), the full problem is a smooth SDP solved via a path-following interior-point method. Linear equality constraints on $\theta$ are eliminated so that optimization is over a reduced coordinate $\mu$, with a log-determinant barrier added to maintain feasibility. Each Newton step's cost is reduced from cubic $O(T^3)$ to linear $O(T)$ in the data length $T$ by exploiting the block-tridiagonal/banded structure of key matrices (notably $W$ in the Lagrangian relaxation) and by using the block-Thomas algorithm for linear solves (Umenberger et al., 2018).
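
For concreteness, a generic block-Thomas solve of a block-tridiagonal system runs in $O(T)$ block operations; the textbook sketch below is illustrative and not the paper's specialized implementation.

```python
import numpy as np

def block_thomas(lower, diag, upper, rhs):
    """Solve a block-tridiagonal system: for each block row i,
    lower[i-1] x[i-1] + diag[i] x[i] + upper[i] x[i+1] = rhs[i].
    Cost is linear in the number of block rows T."""
    T = len(diag)
    d, r = [None] * T, [None] * T
    d[0], r[0] = diag[0], rhs[0]
    for i in range(1, T):                        # forward elimination
        m = lower[i - 1] @ np.linalg.inv(d[i - 1])
        d[i] = diag[i] - m @ upper[i - 1]
        r[i] = rhs[i] - m @ r[i - 1]
    x = [None] * T
    x[-1] = np.linalg.solve(d[-1], r[-1])
    for i in range(T - 2, -1, -1):               # back substitution
        x[i] = np.linalg.solve(d[i], r[i] - upper[i] @ x[i + 1])
    return np.concatenate(x)
```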

For the linear SEDS, solving the Riccati equation and the final projection requires $O(n^3)$ operations due to reliance on Schur or QZ decompositions (Jongeneel et al., 2021). Quadratic/operator-inference SEDS uses automatic differentiation through Runge–Kutta time integration and parametrization-based gradient flow, often solved with Adam and adaptive learning rates.

5. Empirical Performance and Regularization

Empirical studies consistently demonstrate that SEDS approaches outperform standard equation-error or unconstrained OLS fits in settings where stability is important. In "Specialized Interior Point Algorithm for Stable Nonlinear System Identification," LR-SEDS models of nonlinear mechanical systems showed lower validation error, in both median and variance, than wavelet or polynomial NARX models, with no observed divergences, while NARX fits frequently led to unstable models. Similarly, the stability constraints in SEDS act as an intrinsic regularizer, reducing the need for manual feature/subset selection (Umenberger et al., 2018).

In quadratic operator inference, enforcing Lyapunov structure yields accurate long-term prediction and preserves qualitative features (e.g., global decay, trapping regions) that are unattainable with unconstrained models. Numerical evidence includes performance on Burgers' equation, Chafee–Infante systems, Lorenz attractors, and energy-preserving MHD triads (Goyal et al., 2023).

6. Applications, Limitations, and Model Classes

SEDS is applicable across discrete-time and continuous-time, linear and nonlinear, autonomous and non-autonomous systems. In linear cases, statistical guarantees and estimation consistency have been established for single-trajectory sample regimes (Jongeneel et al., 2021). Nonlinear and operator-based SEDS are compatible with general polynomial or quadratic representations, leveraging sum-of-squares relaxations and structure-preserving parameterizations, including energy preservation and equilibrium shifts.

A key limitation in high-dimensional nonlinear SEDS remains the computational burden of large SDPs and the potential conservatism of global Lyapunov-based constraints. Model structure selection—linear, polynomial, quadratic—is often determined by the system's physics or expressive power required for accurate prediction. For non-equilibrium or chaotic dynamics, operator-inference SEDS can guarantee boundedness even in the absence of true equilibria (Goyal et al., 2023).

7. Connections to Broader System Identification and Scientific ML

SEDS approaches tie together classical robust control, operator-theoretic methods, and modern scientific machine learning. Techniques such as SOS optimization, Riccati-based LQR design, and structure-preserving operator learning situate SEDS at the intersection of identification, model reduction, and qualitative physics preservation. SEDS has been empirically compared to RIE, stable subspace-ID, SINDy, and unconstrained operator-inference in accuracy, stability, and regularization properties (Umenberger et al., 2018, Goyal et al., 2023). The domain continues to evolve with advances in structured learning, scalable convex optimization, and stability-driven regularization frameworks.
