Koopman Operator Prediction

Updated 16 September 2025
  • Koopman operator prediction is a mathematical framework that transforms nonlinear system dynamics into a linear evolution through data-driven eigenfunction construction.
  • It leverages spectral properties and convex optimization to lift state variables into high-dimensional observable spaces for precise trajectory and state estimation.
  • The approach is applicable to control, turbulence modeling, and system identification, integrating theoretical density guarantees with practical model predictive control.

Koopman operator prediction is a mathematical, computational, and data-driven framework for forecasting the evolution of nonlinear dynamical systems by employing a linear operator acting on an (often infinite-dimensional) space of observables. This approach leverages the spectral properties of the Koopman operator to construct linear models that, when combined with appropriate lifting functions (eigenfunctions or learned embeddings), can accurately predict both trajectories and statistical properties of complex systems. Koopman-based prediction has found applications in system identification, control, turbulence modeling, uncertainty quantification, epidemic forecasting, and nonlinear state estimation, and underpins several modern advances in machine learning and data-driven modeling of nonlinear phenomena.

1. Koopman Eigenfunction Construction and Linearization

The construction of Koopman eigenfunctions is central to casting nonlinear system evolution into a linear framework. The data-driven methodology introduced by Korda and Mezić (Korda et al., 2018) eschews fixed dictionary selection by defining Koopman eigenfunctions as follows. Given a non-recurrent set $\Gamma$ and a collection of continuous boundary functions $g \in C(\Gamma)$, for each candidate eigenvalue $\lambda \in \mathbb{C}$ the eigenfunction on the flow-generated domain $X_T$ is constructed as

$$\varphi_{(\lambda,g)}(x) = e^{-\lambda \tau(x)} g(S_\tau(x)),$$

where $\tau(x)$ is the (backward) hitting time of $\Gamma$ along the flow $S_t(x)$. This construction solves the differential equation

$$\frac{d}{dt}\varphi(S_t(x)) = \lambda \varphi(S_t(x))$$

with initial condition $g$ on $\Gamma$. The richness of the Koopman spectrum away from attractors means that, via suitable optimization over $\lambda$ and $g$, a dense set of eigenfunctions can be generated. The mapping $L_\lambda g = e^{-\lambda\tau}(g \circ S_\tau)$ is linear in $g$, allowing convex optimization (e.g., $\ell_1$/$\ell_2$-regularized least squares) to minimize the projection error of observable quantities onto the eigenfunction span. The selection of $\lambda$ is non-convex but tractable, and can be optimized via gradient-based methods.

Generalized eigenfunctions associated with Jordan blocks $J_{(\lambda)}$ can similarly be constructed:

$$[\psi_{(\lambda,g_1)}(S_t(x)), \ldots]^T = e^{J_{(\lambda)} t} [g_1(x), \ldots]^T,$$

yielding richer invariant subspaces and enabling linear prediction even under spectral degeneracy.
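The construction can be evaluated numerically by integrating the flow backward in time until it reaches $\Gamma$. Below is a minimal Python sketch (our illustration, not the paper's released MATLAB code): the damped Duffing dynamics, the choice of $\Gamma$ as a circle of radius $R$, the boundary function $g$, and the candidate eigenvalue are all illustrative assumptions.

```python
# Sketch: evaluating a Koopman eigenfunction via the boundary-function
# construction. All modeling choices below are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # Damped Duffing vector field (illustrative dynamics)
    return [x[1], -0.5 * x[1] + x[0] - x[0] ** 3]

R = 2.0                # Gamma = {x : ||x|| = R}, assumed non-recurrent here
g = lambda x: x[0]     # continuous boundary function g on Gamma
lam = -0.5 + 1.0j      # candidate eigenvalue lambda

def hit_gamma(t, x):
    return np.linalg.norm(x) - R
hit_gamma.terminal = True  # stop integration at the first crossing of Gamma

def eigenfunction(x0, t_max=50.0):
    """phi(x) = exp(-lam * tau(x)) * g(S_tau(x)), where tau(x) < 0 is the
    time at which the backward trajectory through x reaches Gamma."""
    sol = solve_ivp(f, [0.0, -t_max], x0, events=hit_gamma,
                    rtol=1e-10, atol=1e-12)
    if sol.t_events[0].size == 0:
        raise RuntimeError("trajectory did not reach Gamma; increase t_max")
    tau, x_gamma = sol.t_events[0][0], sol.y_events[0][0]
    return np.exp(-lam * tau) * g(x_gamma)

# Check the defining property phi(S_t(x)) = exp(lam * t) * phi(x), which is
# equivalent to the differential equation above:
x0, t1 = [0.3, 0.1], 0.7
xt = solve_ivp(f, [0.0, t1], x0, rtol=1e-10, atol=1e-12).y[:, -1]
print(eigenfunction(xt), np.exp(lam * t1) * eigenfunction(x0))
```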

2. Data-Driven Prediction and Control: Linear Lifting and MPC

After constructing a set of Koopman eigenfunctions $\{\varphi_1, \ldots, \varphi_N\}$, the state $x$ is lifted to $z = \varphi(x)$, leading to an exactly linear evolution in the absence of control:

$$\dot{z} = A z, \quad A = \mathrm{diag}(\lambda_1, \ldots, \lambda_N), \quad y = C z,$$

where $C$ is obtained by solving an optimization (typically quadratic) to minimize the error in approximating the observable of interest.
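Since $A$ is fixed by the chosen eigenvalues, identifying the predictor reduces to a linear least-squares fit for $C$. A minimal sketch, assuming lifted samples $Z$ and observable samples $Y$ have already been collected (variable names are ours):

```python
# Sketch of the convex fit for C and the resulting linear predictor.
import numpy as np

def fit_C(Z, Y):
    """Z: (N, m) lifted samples z = phi(x), one column per sample;
    Y: (p, m) observable values. Solves min_C ||Y - C Z||_F^2."""
    C, *_ = np.linalg.lstsq(Z.T, Y.T, rcond=None)  # solves Z^T C^T ~= Y^T
    return C.T

def predict_output(z0, lambdas, C, t):
    """y(t) = C exp(diag(lambda) t) z0 under dz/dt = diag(lambda) z; the real
    part is taken assuming the physical observable is real-valued."""
    return np.real(C @ (np.exp(lambdas * t) * z0))
```

Because the fit is an ordinary least-squares problem, no iterative training is involved and standard linear-algebra routines suffice.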

For control-affine systems $\dot{x} = f(x) + H u$, the lifted dynamics become

$$\dot{z} = A z + B u, \quad y = C z.$$

Here, $A$ and $C$ are fixed from the uncontrolled system, and $B$ is fitted using convex multi-step least squares on data with nonzero input, minimizing the multi-step prediction error. The predictor structure, determined entirely by data-driven convex optimization, is then exploited in a model predictive control (MPC) framework: the finite-horizon optimal control problem is formulated as a convex quadratic program over the lifted system, allowing direct application of linear MPC tools for controlling nonlinear dynamics.
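A sketch of the resulting receding-horizon controller, assuming the continuous-time lifted model has been discretized to $z^+ = A z + B u$ and realified (complex-conjugate eigenpairs converted to real block form); the horizon, weights, input bound, and the use of cvxpy as a generic QP front end are our illustrative choices:

```python
# Sketch of one Koopman MPC step as a convex quadratic program over the
# lifted linear dynamics (our construction, not the paper's released code).
import cvxpy as cp
import numpy as np

def koopman_mpc_step(A, B, C, z0, y_ref, N=20, u_max=1.0, Q=1.0, R=0.01):
    """Return the first input of the optimal sequence over horizon N.
    A, B, C are assumed real-valued (realified lifted model)."""
    nz, nu = B.shape
    z = cp.Variable((nz, N + 1))
    u = cp.Variable((nu, N))
    cost, constraints = 0, [z[:, 0] == z0]
    for k in range(N):
        cost += Q * cp.sum_squares(C @ z[:, k] - y_ref)  # output tracking
        cost += R * cp.sum_squares(u[:, k])              # input effort
        constraints += [z[:, k + 1] == A @ z[:, k] + B @ u[:, k],
                        cp.norm(u[:, k], "inf") <= u_max]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value[:, 0]
```

At each sampling instant the current state is lifted, the QP is re-solved, and only the first input is applied, exactly as in standard linear MPC.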

3. Theoretical Guarantees: Density, Generalized Eigenfunctions, and Invariant Subspaces

The foundational result (Theorem 3.1 in (Korda et al., 2018)) states that, for any set $\Lambda_0 \subset \mathbb{C}$ (containing at least one eigenvalue with nonzero real part) and a countable, dense family $G \subset C(\Gamma)$, the span of the generated eigenfunctions becomes dense in $C(X_T)$. Thus, every continuous observable on a sufficiently reachable domain $X_T$ can be approximated arbitrarily well in the Koopman invariant subspace generated via this construction.
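In symbols, the guarantee can be paraphrased as a uniform-approximation statement (our paraphrase, not the theorem's verbatim wording):

```latex
% Density of the generated eigenfunction span in C(X_T): finitely many
% constructed eigenfunctions approximate any continuous observable to any
% tolerance, uniformly on X_T.
\forall f \in C(X_T),\ \forall \varepsilon > 0:\ \exists\, n \in \mathbb{N},\
c_1, \ldots, c_n \in \mathbb{C} \ \text{such that}\quad
\sup_{x \in X_T} \Bigl| f(x) - \sum_{i=1}^{n} c_i\, \varphi_{(\lambda_i, g_i)}(x) \Bigr| < \varepsilon .
```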

The extension to generalized eigenfunctions, via chains parametrized by Jordan matrices $J_{(\lambda)}$, allows for the construction of invariant subspaces even in the presence of non-trivial algebraic or geometric multiplicity, thereby supporting linear prediction, spectral analysis, and model reduction tasks.

4. Comparative Numerical Results, Prediction Accuracy, and Feedback Control

Numerical examples in (Korda et al., 2018) demonstrate the practical efficacy of the eigenfunction-based linear predictors in both prediction and control:

  • Van der Pol oscillator: Lifting from a non-recurrent set yields accurate state prediction; the error is minimized via eigenvalue optimization.
  • Damped Duffing oscillator: The framework captures multi-modal equilibrium structure, phase transitions (e.g., stabilization at unstable equilibria via Koopman MPC), and enables successful tracking between equilibrium points.

Performance is compared between predictors using “as-is” eigenvalues (e.g., from DMD) and those using optimized eigenvalues (via nonconvex, gradient-based search), with the latter showing substantially lower prediction errors. Closed-loop control experiments on the Duffing system illustrate effective MPC deployment, with solution times and closed-loop behavior reflecting both computational efficiency and feedback robustness.
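The flavor of that comparison can be reproduced on a toy scalar signal. The sketch below uses Nelder-Mead as a stand-in for the gradient-based search: the inner coefficient fit is convex least squares, while the outer search over $\lambda = a + ib$ is the non-convex, low-dimensional part; the signal and starting point are illustrative assumptions.

```python
# Sketch: optimizing a continuous-time eigenvalue against the multi-step
# prediction error on a toy signal (not the paper's benchmark).
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 10.0, 101)
y = np.exp(-0.3 * t) * np.cos(2.0 * t)           # samples of the observable

def multistep_error(ab):
    a, b = ab
    phi = np.exp((a + 1j * b) * t)               # candidate mode e^{lambda t}
    # Re(c * phi) with c = c_r + i*c_i is linear in (c_r, c_i):
    Phi = np.column_stack([phi.real, -phi.imag])
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # convex inner fit
    return float(np.sum((Phi @ c - y) ** 2))     # multi-step residual

res = minimize(multistep_error, x0=[-0.1, 1.0], method="Nelder-Mead")
print(res.x)  # converges near (-0.3, 2.0), the signal's true eigenvalue
```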

Moreover, the modularity of the construction permits post hoc enrichment of the predictor basis (by exploiting the algebraic closure of eigenfunctions under products and powers) without recomputation of the entire model. All examples are accompanied by publicly released Matlab code, ensuring reproducibility and accessibility.
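The enrichment step rests on the closure of eigenfunctions under products: if $\varphi_1, \varphi_2$ are eigenfunctions with eigenvalues $\lambda_1, \lambda_2$, then $\varphi_1 \varphi_2$ is an eigenfunction with eigenvalue $\lambda_1 + \lambda_2$. A minimal numerical check, using illustrative eigenvalues and samples along a single trajectory:

```python
# Sketch verifying the closure property behind post hoc basis enrichment.
import numpy as np

t = np.linspace(0.0, 5.0, 50)
lam1, lam2 = -0.2 + 1.0j, -0.5 - 0.3j   # illustrative eigenvalues
phi1_t = np.exp(lam1 * t)  # phi1(S_t(x)) = e^{lam1 t} phi1(x), with phi1(x) = 1
phi2_t = np.exp(lam2 * t)

# The product mode evolves linearly with rate lam1 + lam2, so it can be
# appended to the lifted state without refitting the existing eigenpairs:
assert np.allclose(phi1_t * phi2_t, np.exp((lam1 + lam2) * t))
print("closure verified")
```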

5. Structural and Algorithmic Features

Key attributes of the framework are:

  • Dictionary-free construction: No fixed set of basis functions is required; eigenfunctions are defined via boundary data and optimized globally.
  • Convexity: All stages (except eigenvalue search) are cast as convex optimizations — specifically, linear/quadratic programs or convex least squares.
  • Generalizability: The architecture extends naturally to controlled settings, multi-step prediction, and model predictive control.
  • Modularity: Eigenfunctions for additional observables or interpolation at new points can be generated on demand, reflecting the algebraic structure of the Koopman operator.
  • Theoretical rigor: Density results guarantee that the constructed model can, in principle, capture any target observable up to required fidelity.

6. Limitations and Deployment Considerations

Potential limitations include:

  • Dependence on non-recurrent set selection: The quality and reachability of the non-recurrent set $\Gamma$ affect coverage in state space.
  • Necessity of sufficient data: Approximating the Koopman invariant subspace requires adequate excitation and coverage via data.
  • Choice of eigenvalues: While the optimization of $\lambda$ is low-dimensional and tractable, the non-convexity can lead to sub-optimal local minima.
  • Scalability: For high-dimensional systems, computational cost scales with the number of eigenfunctions and the complexity of the optimization landscape.
  • Applicability near attractors: The approach leverages spectral richness away from attractors; on or near attractors the spectrum condenses and richer sets of eigenfunctions may need to be computed.

Deployment in real-world settings benefits from the framework’s convexity and dictionary-free design, which avoid issues of basis selection and non-convex neural network training. The method is directly compatible with existing convex optimization libraries and can be extended to larger systems given sufficient computational resources.

7. Impact and Integration with Broader Koopman Operator Research

The explicit, optimization-driven construction of eigenfunctions and prediction models offers a principled bridge between theoretical guarantees (density of eigenfunction spans) and practical, modular architectures for high-accuracy, data-driven prediction and control in nonlinear systems. By focusing on convexity, modularity, and post hoc extensibility, this method stands in contrast to black-box, non-convex machine learning techniques or heavily parametric approaches, and offers a robust foundation for ongoing developments in Koopman-based prediction, model reduction, and data-driven control synthesis. The release of accompanying code further solidifies its position as a reference implementation in the field.

References

Korda, M. and Mezić, I. (2018). Optimal construction of Koopman eigenfunctions for prediction and control. arXiv:1810.08733.
