Koopman-Based Auto-Regressive Model
- Koopman-based auto-regressive models are data-driven predictors that lift nonlinear states to a linear observable space.
- They employ preselected or learned dictionaries and techniques like EDMD to approximate the Koopman operator for accurate forecasting.
- These models enable robust prediction, control, and uncertainty quantification across various dynamical systems despite challenges in observable selection.
A Koopman-based auto-regressive model is a class of data-driven predictors for nonlinear dynamical systems in which the state is “lifted” to a space of observables where the time evolution is (approximately) governed by a finite-dimensional linear operator. This approach fuses operator-theoretic dynamical-systems concepts with modern machine learning, system identification, and time-series forecasting tools. It exploits the Koopman operator’s property of linear propagation on observables, allowing nonlinear evolution in the original state space to be mapped to a (potentially infinite-dimensional) linear dynamic that admits accurate, interpretable, and often computationally efficient auto-regressive prediction.
1. Koopman Operator Principle and Lifting
Given a discrete-time system $x_{k+1} = F(x_k)$, the Koopman operator $\mathcal{K}$ acts on observable functions $g$ by composition: $(\mathcal{K} g)(x) = (g \circ F)(x) = g(F(x))$. The crucial insight is that, despite $F$ being nonlinear, $\mathcal{K}$ is linear (albeit infinite dimensional) and can be spectrally decomposed into eigenfunctions $\varphi_j$ and eigenvalues $\lambda_j$ satisfying $\mathcal{K} \varphi_j = \lambda_j \varphi_j$, as in (Bevanda et al., 2021).
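A standard textbook illustration (not specific to the cited works) shows how augmenting the state with a single nonlinear observable can make the lifted dynamics exactly linear:

\[
x_{k+1} = \lambda x_k, \qquad y_{k+1} = \mu y_k + x_k^2
\quad\Longrightarrow\quad
\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ x_{k+1}^2 \end{bmatrix}
=
\begin{bmatrix}
\lambda & 0 & 0 \\
0 & \mu & 1 \\
0 & 0 & \lambda^2
\end{bmatrix}
\begin{bmatrix} x_k \\ y_k \\ x_k^2 \end{bmatrix}.
\]

Here the dictionary $(x, y, x^2)$ spans a Koopman-invariant subspace, so the finite linear model is exact rather than approximate.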
Auto-regressive forecasting is realized by constructing or learning a set of measurement functions (observables) $\Phi = (\phi_1, \dots, \phi_N)$ that capture the dominant dynamics, such that their evolution is (approximately) linear and governed by a finite-dimensional approximation $K$ of the Koopman operator. In practice, this is facilitated either by preselecting a dictionary (e.g., monomials, wavelets, radial basis functions) or by learning the observables with neural networks (N. et al., 2022, Uchida et al., 4 Dec 2024).
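For instance, a minimal monomial dictionary lift for a scalar state might look like the following sketch (function and variable names are illustrative, not drawn from the cited works):

```python
import numpy as np

def monomial_dictionary(x, max_degree=3):
    """Lift a scalar state x to the monomial observables [1, x, x^2, ..., x^d]."""
    return np.array([x**d for d in range(max_degree + 1)])

# Lift a short trajectory (iterates of x_{k+1} = 0.9 * x_k) into observable space.
trajectory = np.array([0.9, 0.81, 0.729])
lifted = np.stack([monomial_dictionary(x) for x in trajectory])  # shape (3, 4)
```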
2. Koopman Operator Approximations and AR Model Formulation
After lifting, the dynamics in the observable space can be represented as an auto-regressive (AR) model
\[
z_{k+1} = K z_k, \qquad \hat{y}_k = C z_k,
\]
where $z_k = \Phi(x_k)$ is the lifted state, $K \in \mathbb{R}^{N \times N}$ approximates the Koopman operator on the span of the observables, and $C$ projects back to measured outputs (Bevanda et al., 2021, Snyder et al., 2021).
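Because the lifted dynamics are linear, an $h$-step-ahead forecast reduces to a matrix power,

\[
\hat{y}_{k+h} = C\, K^{h}\, \Phi(x_k), \qquad h = 1, 2, \dots,
\]

which is what makes long-horizon auto-regressive rollout computationally cheap and amenable to spectral analysis.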
The Extended Dynamic Mode Decomposition (EDMD) (Snyder et al., 2021) provides a widely used data-driven algorithm: for snapshot pairs $\{(x_k, x_{k+1})\}_{k=1}^{M}$ and a dictionary $\Phi$, it solves the least-squares regression $\min_K \sum_k \| \Phi(x_{k+1}) - K\,\Phi(x_k) \|_2^2$ to find the best-fit $K$. Alternatively, when the measurement functions are learned by neural networks, the measurement map and the Koopman operator are optimized jointly (Uchida et al., 4 Dec 2024, N. et al., 2022).
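The following numpy sketch illustrates the EDMD regression, reusing the monomial dictionary sketched above; the toy map and all names are illustrative:

```python
import numpy as np

def edmd(X, Y, lift):
    """EDMD: find K minimizing ||lift(Y) - K @ lift(X)|| over snapshot pairs.

    X, Y : arrays of shape (M,) with Y[k] = F(X[k]).
    lift : function mapping a state to its observable vector.
    """
    PhiX = np.stack([lift(x) for x in X], axis=1)  # (N, M)
    PhiY = np.stack([lift(y) for y in Y], axis=1)  # (N, M)
    # Closed-form least-squares solution of K @ PhiX ≈ PhiY.
    return PhiY @ np.linalg.pinv(PhiX)

# Snapshot pairs from the toy map x_{k+1} = 0.9 x_k - 0.1 x_k^3.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=200)
Y = 0.9 * X - 0.1 * X**3

K = edmd(X, Y, monomial_dictionary)

# One-step prediction: lift, propagate linearly, read off the x-component.
z = monomial_dictionary(0.5)
x_pred = (K @ z)[1]  # index 1 holds the observable phi(x) = x
```

In practice, the plain pseudoinverse solve is often replaced by ridge-regularized or rank-truncated variants to control variance.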
For controlled systems, the lifted, embedded dynamics with inputs $u_k$ take the form
\[
z_{k+1} = A z_k + B u_k,
\]
possibly with a state/input-dependent input matrix $B(\cdot)$ (Linear Parameter-Varying (LPV) forms) when the dynamics are not strictly LTI in the lifted space (Iacob et al., 2022).
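Under an LTI assumption in the lifted space, the transition and input matrices can be identified in one joint least-squares solve by stacking lifted states and inputs, in the spirit of EDMD with control (a sketch under that assumption; names are illustrative):

```python
import numpy as np

def edmd_with_control(X, U, Y, lift):
    """Fit lifted dynamics z' ≈ A z + B u via a single least-squares solve.

    X, Y : state snapshots with Y[k] = F(X[k], U[k]); U : inputs.
    """
    PhiX = np.stack([lift(x) for x in X], axis=1)  # (N, M)
    PhiY = np.stack([lift(y) for y in Y], axis=1)  # (N, M)
    Umat = np.atleast_2d(U)                        # (m, M)
    G = np.vstack([PhiX, Umat])                    # stacked regressors (N+m, M)
    AB = PhiY @ np.linalg.pinv(G)                  # (N, N+m)
    N = PhiX.shape[0]
    return AB[:, :N], AB[:, N:]                    # A, B
```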
3. Model Learning, Regularization, and Ensemble Methods
Finite-dimensional Koopman approximations inherently involve model reduction and selection tradeoffs. The choice of basis or learned observables critically affects approximation accuracy, with convergence rates and bias–variance trade-offs characterized via function space regularity and sampling theory (Kurdila et al., 2018, Uchida et al., 4 Dec 2024).
- Koopman Regularization: Introduces constrained optimization to extract functionally independent Koopman eigenfunctions from sparse/corrupted data, enforcing linear evolution in the lifted space and “parsimony” (minimal sufficient observables) (Cohen, 17 Mar 2024).
- Model Averaging: To compensate for the inaccuracies of any single model, an ensemble of Koopman models may be learned, each with its own observables and linear maps, and their predictions combined via Bayesian Model Averaging, yielding weighted linear embedding models that improve robustness and generalizability (Uchida et al., 4 Dec 2024); a minimal sketch of such ensemble forecasting follows this list.
- Uncertainty Quantification: Probabilistic learning frameworks estimate prediction uncertainty, e.g., via Bayesian neural networks or Gaussian processes for the lifting function, with mechanisms such as Wasserstein-distance regularization to maintain model confidence (Lian et al., 2021).
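A minimal sketch of such ensemble forecasting, using softmax weights derived from validation errors as a simple stand-in for full Bayesian posterior model weights (all names are illustrative):

```python
import numpy as np

def softmax_weights(val_errors):
    """Convert per-model validation errors into normalized ensemble weights."""
    scores = np.exp(-np.asarray(val_errors, dtype=float))
    return scores / scores.sum()

def ensemble_forecast(models, lifts, weights, x, horizon):
    """Average h-step forecasts from several lifted linear models.

    models : list of (K, C) pairs (lifted transition and readout matrices).
    lifts  : per-model lifting functions; weights : nonnegative, sum to 1.
    """
    preds = []
    for (K, C), lift in zip(models, lifts):
        z = np.linalg.matrix_power(K, horizon) @ lift(x)  # linear rollout
        preds.append(C @ z)
    return sum(w * p for w, p in zip(weights, preds))
```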
4. Koopman-Based Auto-Regression in Deep Learning and RNNs
Modern implementations frequently integrate Koopman-theoretic models with deep learning architectures:
- Koopman Autoencoders: Nonlinear encoders lift data to a latent space whose evolution is governed by a learned linear operator; decoders map back to state space. The Consistent Koopman Autoencoder includes explicit forward/backward operators and a consistency loss to ensure invertibility and robust long-horizon prediction (Azencot et al., 2020). The Koopman Invertible Autoencoder employs invertible networks to guarantee bidirectional propagation and maintain reversibility (Tayal et al., 2023). A sketch of the basic autoencoder pattern appears after this list.
- Structured Koopman Linear RNNs: The Structured Koopman Operator Linear RNN (SKOLR) models lagged input as an extended state, equating structured Koopman operator action with linear RNN update dynamics, enabling highly parallelized and scalable time-series forecasting (Zhang et al., 17 Jun 2025).
- Koopman-based Deep Estimators for Control: Hybrid architectures combine Koopman-based estimators with reinforcement learning agents, where the linear Koopman model captures the tractable dynamics and a policy network learns a residual correction for improved state estimation or control (Sun et al., 1 May 2024).
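A minimal PyTorch sketch of the basic Koopman-autoencoder objective referenced above, combining reconstruction, latent-linearity, and one-step prediction losses; the cited architectures add consistency or invertibility constraints on top of this pattern, and the architecture choices here are illustrative:

```python
import torch
import torch.nn as nn

class KoopmanAutoencoder(nn.Module):
    """Encoder lifts x to latent z; a learned linear map K advances z one step;
    the decoder maps back to state space."""

    def __init__(self, state_dim, latent_dim, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(), nn.Linear(hidden, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.Tanh(), nn.Linear(hidden, state_dim))
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)  # Koopman matrix

    def loss(self, x_now, x_next):
        z_now, z_next = self.encoder(x_now), self.encoder(x_next)
        recon  = (self.decoder(z_now) - x_now).pow(2).mean()          # autoencoding
        linear = (self.K(z_now) - z_next).pow(2).mean()               # latent linearity
        pred   = (self.decoder(self.K(z_now)) - x_next).pow(2).mean() # one-step forecast
        return recon + linear + pred
```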
5. Spectral and Universal Approximation Properties
The richness and generality of Koopman-based AR models hinge on the expressive capacity of the finite-dimensional approximant space:
- Koopman Kernel Regression (KKR): Constructs a universal, Koopman-invariant RKHS that guarantees approximation of arbitrary LTI flows in the embedded space and supports statistical learning convergence guarantees (Bevanda et al., 2023).
- Deep Koopman-layered Models with Toeplitz Matrices: These models represent the evolution in a Fourier basis via products of Toeplitz matrices, leveraging their universal factorization property to approximate any time-series transformation arbitrarily well within an RKHS framework, and they extend to nonautonomous/multistage systems (Hashimoto et al., 3 Oct 2024).
6. Applications and Modeling Scenarios
Koopman-based auto-regressive models are applicable across a broad range of scientific and engineering domains:
- Prediction and Control of Nonlinear Dynamics: Koopman-based AR models support efficient forecasting, long-horizon simulation, and feedback control in systems ranging from fluid flows and neural activity to mechanical oscillators (Snyder et al., 2021, N. et al., 2022, Sun et al., 1 May 2024).
- Robust Reliability Analysis and Uncertainty Quantification: Deep Koopman architectures enable accurate time-dependent reliability analysis under uncertainties, outperforming conventional autoregressive neural networks and LSTM ensembles in out-of-distribution generalization (N. et al., 2022).
- Systems with Non-Stationarity and Memory: Hierarchical, block-residual architectures (such as Koopa) model both global and local/rapidly varying dynamics in non-stationary time-series, integrating spectral disentanglement and blockwise Koopman predictors (Liu et al., 2023). Episodic memory mechanisms supplement Koopman learning to enhance prediction by recalling and leveraging past episodes similar to the current state (Redman et al., 2023).
- Neural Representation Dynamics: Koopman autoencoders model the progression of internal representations in deep networks, providing analytic surrogates for model editing, interpolation, or targeted class unlearning with explicit preservation of topological characteristics of feature space (Aswani et al., 19 May 2025).
- Reinforcement Learning and Value Function Computation: Koopman operator lifting (with control parameterization as a Koopman tensor) allows the Bellman or Hamilton–Jacobi–Bellman equations to be posed and solved in a linear observable space, underpinning sample-efficient and interpretable RL algorithms (Rozwood et al., 4 Mar 2024).
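Schematically, and not as the exact formulation of the cited paper: if the value function is parameterized linearly in the lifted coordinates, $V(x) = w^{\top} \Phi(x)$, and $K_u$ denotes an action-indexed Koopman matrix approximating the conditional expectation of the observables, then the Bellman backup for action $u$ becomes linear in $w$:

\[
V(x) = r(x, u) + \gamma\, \mathbb{E}\big[ V(x_{+}) \mid x, u \big]
\;\approx\; r(x, u) + \gamma\, w^{\top} K_u \Phi(x),
\]

so value evaluation reduces to linear algebra in the observable space.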
7. Limitations and Open Challenges
While Koopman-based auto-regressive models offer a principled and often computationally efficient way to model nonlinear dynamics, their practical success is bounded by several challenges:
- Approximation error due to truncation or suboptimal choice of observables, especially in high-dimensional or strongly nonlinear regimes (Kurdila et al., 2018, Bevanda et al., 2021).
- Trade-off between bias (finite-dimension error) and variance (data-induced error), as characterized by statistical learning theory (Kurdila et al., 2018).
- Model selection, particularly the construction of Koopman-invariant dictionaries, remains nontrivial and essential for good generalization (Bevanda et al., 2021, Cohen, 17 Mar 2024).
- Controlled systems may require LPV or even bilinear representation in the lifted space; enforcing or learning admissible input coupling is an ongoing research concern (Iacob et al., 2022).
- Computational efficiency can be hampered by high-dimensional embeddings or the evaluation of matrix exponentials, though recent advances (e.g., Krylov subspace methods for exponentials of Toeplitz matrices) provide partial mitigation (Hashimoto et al., 3 Oct 2024).
Koopman-based auto-regressive modeling thus stands at the intersection of dynamical systems theory, system identification, functional analysis, and machine learning, with ongoing research extending its reach to more complex, high-dimensional, and uncertain systems.