Extended Dynamic Mode Decomposition

Updated 26 July 2025
  • Extended Dynamic Mode Decomposition (EDMD) is a data-driven method that constructs finite-dimensional approximations of the Koopman operator to study nonlinear dynamics.
  • It projects observables onto tailored subspaces, enabling spectral analysis, model reduction, forecasting, control, and system identification.
  • Advancements like dictionary learning and structure-preserving variants enhance EDMD's convergence, interpretability, and applicability to complex systems.

Extended Dynamic Mode Decomposition (EDMD) is a family of data-driven operator-theoretic algorithms for constructing finite-dimensional approximations to the Koopman operator—a linear but infinite-dimensional operator governing the evolution of observables in nonlinear dynamical systems. By projecting the action of the Koopman operator onto a user-chosen subspace of observables, EDMD enables spectral analysis, model reduction, forecasting, control, and system identification of deterministic and stochastic dynamics. The method generalizes Dynamic Mode Decomposition (DMD) by leveraging flexible dictionaries of observables, allowing for rich representations of nonlinear phenomena. Rigorous convergence theory, scalable implementations, integration with machine learning, and structure-preserving variants have significantly broadened the scope and applicability of EDMD.

1. Theoretical Foundations and Algorithmic Structure

EDMD approximates the Koopman operator $\mathcal{K}$ by finite-dimensional surrogates constructed from data. Given a dictionary $\Psi = \{\psi_1, \ldots, \psi_N\}$ of observables and $M$ snapshot pairs $(x_i, y_i)$—typically with $y_i = T(x_i)$, where $T$ is the system evolution map—the core EDMD matrix $K_{N,M}$ is computed to best satisfy $\Psi(y_i) \approx \Psi(x_i)\,K_{N,M}$ in a least-squares sense, where $\Psi(x) = (\psi_1(x), \ldots, \psi_N(x))$ is the row vector of dictionary evaluations: $K_{N,M} = G^+ A$, with $G_{ij} = \frac{1}{M}\sum_{k=1}^M \psi_i(x_k)\,\psi_j(x_k)$, $A_{ij} = \frac{1}{M}\sum_{k=1}^M \psi_i(x_k)\,\psi_j(y_k)$, and $G^+$ the Moore–Penrose pseudoinverse of $G$. The nature, span, and structure of the dictionary $\Psi$ critically shape both the approximation and its interpretability.
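
As a concrete illustration, the following is a minimal NumPy sketch of this computation, assuming scalar snapshots, a simple monomial dictionary, and the logistic map as a stand-in dynamical system (all of these choices are illustrative, not prescribed by the method):

```python
import numpy as np

def edmd(X, Y, dict_fn):
    """Minimal EDMD: X, Y are (M, d) snapshot arrays with Y[k] = T(X[k])."""
    PsiX = dict_fn(X)                      # (M, N): row k is Psi(x_k)
    PsiY = dict_fn(Y)                      # (M, N): row k is Psi(y_k)
    M = X.shape[0]
    G = PsiX.T @ PsiX / M                  # G_ij = (1/M) sum_k psi_i(x_k) psi_j(x_k)
    A = PsiX.T @ PsiY / M                  # A_ij = (1/M) sum_k psi_i(x_k) psi_j(y_k)
    K = np.linalg.pinv(G) @ A              # K_{N,M} = G^+ A
    return K, PsiX, PsiY

def monomials(X, degree=5):
    """Illustrative dictionary for scalar states: [1, x, x^2, ..., x^degree]."""
    x = X.ravel()
    return np.column_stack([x**p for p in range(degree + 1)])

# Snapshot pairs from the logistic map x_{k+1} = 4 x_k (1 - x_k)
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 1))
Y = 4.0 * X * (1.0 - X)

K, PsiX, PsiY = edmd(X, Y, monomials)
print("approximate Koopman eigenvalues:", np.sort_complex(np.linalg.eigvals(K)))
print("relative residual:", np.linalg.norm(PsiX @ K - PsiY) / np.linalg.norm(PsiY))
```

The printed residual is the empirical least-squares defect of $\Psi(x_i) K_{N,M} \approx \Psi(y_i)$ over the data; it vanishes only when the dictionary span is invariant under the dynamics.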

With independently or ergodically sampled data points (drawn from some measure $\mu$), as $M \to \infty$ the EDMD operator $K_{N,M}$ converges, with probability one, to the $L^2(\mu)$-orthogonal projection $K_N$ of the Koopman operator $\mathcal{K}$ onto the subspace $F_N = \mathrm{span}\{\psi_1, \ldots, \psi_N\}$ (Korda et al., 2017). When $\Psi$ is chosen from an orthonormal basis of $L^2(\mu)$ and $N \to \infty$, $K_N$ converges in the strong operator topology to $\mathcal{K}$. This hierarchical limit justifies the finite-dimensional approximation as a computationally tractable surrogate.

2. Spectral Properties and Convergence Analysis

The finite-dimensional Koopman matrix $K_N$ admits a spectrum (eigenvalues and eigenvectors) that approximates the spectral characteristics of the Koopman operator (Korda et al., 2017, Slipantschuk et al., 2019, Wormell, 2023). The following hold:

  • Accumulation points of the spectra of $K_N$ correspond to genuine Koopman eigenvalues, and the associated eigenfunctions converge weakly in $L^2(\mu)$.
  • For analytic, expanding maps or maps admitting an analytic extension, the finite-section EDMD can converge exponentially fast to the true spectrum, provided the dictionary functions are analytic (e.g., monomials, Fourier basis) and sufficiently many samples are used (Slipantschuk et al., 2019, Wormell, 2023, Akindji et al., 12 Apr 2024).
  • Spectral pollution—a phenomenon in which spurious eigenvalues appear—can be avoided by projecting onto polynomial subspaces via orthogonal (Taylor-type) projections rather than merely $L^2$-Galerkin projections (Mauroy et al., 24 May 2024).
  • In measure-preserving and unitary settings, enforcing constraints such as $\mathcal{K}^* G \mathcal{K} = G$ at the discrete level (i.e., using mpEDMD) ensures the spectral structure (e.g., eigenvalues on the unit circle, energy conservation) is preserved (Colbrook, 2022); a small numerical illustration follows below.
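
As a small numerical illustration of the measure-preserving setting, consider the circle rotation $x \mapsto x + \alpha \pmod{2\pi}$ with a real trigonometric dictionary. Because this dictionary spans a Koopman-invariant subspace, plain EDMD already recovers eigenvalues on the unit circle here; the sketch (assuming i.i.d. uniform sampling and illustrative values of $\alpha$ and the dictionary size) merely visualizes the spectral structure that mpEDMD enforces for general measure-preserving systems:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, M, n_max = 0.7, 5000, 4               # rotation angle, sample count, max Fourier mode

x = rng.uniform(0.0, 2.0 * np.pi, M)         # i.i.d. samples from the invariant (uniform) measure
y = (x + alpha) % (2.0 * np.pi)              # one step of the rotation

def trig_dict(z, n_max):
    """Real trigonometric dictionary [1, cos z, sin z, ..., cos(n_max z), sin(n_max z)]."""
    cols = [np.ones_like(z)]
    for n in range(1, n_max + 1):
        cols += [np.cos(n * z), np.sin(n * z)]
    return np.column_stack(cols)

PsiX, PsiY = trig_dict(x, n_max), trig_dict(y, n_max)
G = PsiX.T @ PsiX / M
A = PsiX.T @ PsiY / M
K = np.linalg.pinv(G) @ A

lam = np.linalg.eigvals(K)
print("eigenvalue moduli:", np.sort(np.abs(lam)))            # all close to 1 (unit circle)
print("eigenvalue phases:", np.sort(np.abs(np.angle(lam))))  # close to {0, alpha, 2*alpha, ...}
```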

The following table contrasts key convergence properties in several regimes:

| Setting | Dictionary / space | Spectral convergence rate |
|---|---|---|
| Analytic maps, OPUC | Trigonometric polynomials | Exponential in $N$ |
| Generic nonlinear system | Monomials / RBFs | Typically subexponential |
| Measure-preserving | General, with Procrustes step | Weak convergence, spectrum on $\mathbb{T}$ |
| kEDMD + Wendland kernel | RKHS native space | Algebraic in fill distance $h$ |

The exponential convergence of EDMD to the true spectral data requires careful selection of both the observable space (analytic, matched to dynamics) and, for collocation-type variants, rapid scaling of sampling points with the number of observables (Akindji et al., 12 Apr 2024).

3. Design of Observables and Dictionary Learning

The choice of observables (dictionary functions) is central to EDMD's accuracy, interpretability, and convergence. Approaches include:

  • Dictionary learning: the dictionary is parameterized by a trainable model and optimized jointly with the Koopman matrix via a regularized least-squares objective,

$$(K, \theta) = \underset{K, \theta}{\mathrm{argmin}}\; \sum_n \big\|\Psi(y_n; \theta) - K\,\Psi(x_n; \theta)\big\|^2 + \lambda \|K\|_F^2,$$

with $\Psi(x;\theta)$ parameterized by a neural network (Li et al., 2017).

  • Analytical construction via Lie derivatives: suitable for systems with non-polynomial nonlinearities (e.g., sine, cosine), this method lifts system states to a polynomial system, providing physically interpretable, closed-form observables (Netto et al., 2020).
  • Sparse or convolutional representations: Convolutional sparse coding (CSC-DMD) integrates spatial dictionary learning with EDMD to efficiently encode and predict spatially structured phenomena, such as riverbed estimations from surface measurements (Kaneko et al., 2018).

Dictionary learning enables reduction of the number of required observables for a given prediction accuracy and tailors representations to the underlying dynamics (Li et al., 2017, Alford-Lago et al., 2021, Terao et al., 2021).
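
As a rough illustration of the optimization above, the following PyTorch sketch trains a small MLP dictionary jointly with the Koopman matrix. It uses the row-vector convention of Section 1, a toy cubic map, and arbitrary network sizes and hyperparameters; all of these are assumptions for the sketch. In practice, $K$ can also be obtained in closed form by least squares for fixed dictionary parameters, alternating with gradient updates of $\theta$.

```python
import torch
import torch.nn as nn

class LearnedDictionary(nn.Module):
    """Small MLP dictionary Psi(x; theta) mapping states to n_obs observables."""
    def __init__(self, state_dim=2, n_obs=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, n_obs),
        )

    def forward(self, x):
        return self.net(x)

def edmd_dl_loss(psi, K, x, y, lam=1e-3):
    """|| Psi(y; theta) - Psi(x; theta) K ||^2 + lam ||K||_F^2 (row-vector convention)."""
    return ((psi(y) - psi(x) @ K) ** 2).sum() + lam * (K ** 2).sum()

# Toy snapshot data from an illustrative cubic map (not tied to any particular system)
torch.manual_seed(0)
x = torch.rand(1024, 2) * 2.0 - 1.0
y = torch.stack([x[:, 1], -0.5 * x[:, 0] + x[:, 1] - x[:, 1] ** 3], dim=1)

psi = LearnedDictionary()
K = torch.zeros(16, 16, requires_grad=True)
opt = torch.optim.Adam(list(psi.parameters()) + [K], lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    loss = edmd_dl_loss(psi, K, x, y)
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```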

4. Structure-Preserving and Symmetry-Aware Extensions

Advanced EDMD variants address intrinsic system structures:

  • Measure-preserving truncations (mpEDMD): Imposing unitarity or isometry in the finite-dimensional approximation through an orthogonal Procrustes problem guarantees convergence of spectral measures, spectrum, and Koopman modes; avoids spectral pollution; and ensures robustness to noise and high-dimensionality (Colbrook, 2022).
  • Symmetry integration via group convolutions: For systems and observables with symmetry, the EDMD matrix can be parameterized by a group-convolutional structure, drastically lowering sample and parameter requirements and enabling efficient predictions and eigenfunction computations via the generalized Fourier transform (Harder et al., 1 Nov 2024). A finite group action partitions the observables and the state, and the equivariant EDMD matrix can be realized as a group convolution kernel $A$ such that

$$K_{s \cdot g,\, s' \cdot h} = A_{s, s'}(g h^{-1}),$$

with computation and eigendecomposition implemented efficiently in Fourier space.

  • Modular EDMD for interconnected systems: By leveraging network (graph) structure, local Koopman generators are learned for subsystems, which are then coupled appropriately. This modularization mitigates the curse of dimensionality and facilitates transfer learning and plug-and-play adaptation to topology changes (Guo et al., 22 Aug 2024).
  • Kernel extensions (kEDMD): Utilizing canonical features of positive definite kernels (e.g., Wendland functions) anchors the approximation in an RKHS where rigorous $L^\infty$ error bounds are available (Köhne et al., 27 Mar 2024). The method enables uniform convergence rates and avoids ambiguity in dictionary selection; a brief sketch follows this list.
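
The following is a minimal sketch of this idea, assuming the plain EDMD formulas of Section 1 applied to a dictionary of canonical kernel features $\psi_j = k(z_j, \cdot)$ at a set of centers $z_j$, with a compactly supported Wendland $C^2$ kernel. The toy map, center selection, and shape parameter are illustrative, and this is not the exact interpolation-based construction analyzed in the cited work:

```python
import numpy as np

def wendland_c2(r):
    """Compactly supported Wendland C^2 function phi(r) = (1-r)_+^4 (4r+1), valid for d <= 3."""
    r = np.clip(r, 0.0, None)
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def kernel_matrix(A, B, shape=1.0):
    """Matrix of kernel evaluations k(a_i, b_j) with the scaled Wendland kernel."""
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return wendland_c2(dists / shape)

# Snapshots of a toy 2-D map (illustrative only)
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(400, 2))
Y = np.column_stack([0.9 * X[:, 0], 0.5 * X[:, 1] + 0.3 * X[:, 0] ** 2])

# Dictionary = canonical features psi_j(.) = k(z_j, .) at a subset of centers z_j
Z = X[::8]                                   # 50 centers; refining them shrinks the fill distance h
PsiX = kernel_matrix(X, Z, shape=0.8)        # (M, N): Psi(x_k) as rows
PsiY = kernel_matrix(Y, Z, shape=0.8)
M = X.shape[0]
G = PsiX.T @ PsiX / M
A = PsiX.T @ PsiY / M
K = np.linalg.pinv(G) @ A                    # plain EDMD formulas with kernel features
print("leading eigenvalue moduli:", np.sort(np.abs(np.linalg.eigvals(K)))[::-1][:5])
```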

5. Applications: Estimation, Prediction, and Control

EDMD underpins diverse applications in science and engineering:

  • Forecasting and finite-horizon prediction: For any $f \in L^2(\mu)$, the sequence $K_N^i f$ approximates $\mathcal{K}^i f$ as $N \to \infty$, uniformly over finite time windows. This theoretical guarantee enables forecasting in high-dimensional, nonlinear systems, including off-attractor behaviors (Korda et al., 2017, Wormell, 2023, Netto et al., 2020); a worked forecasting sketch follows this list.
  • Estimation and data assimilation: EDMD and machine learning–augmented variants provide tractable frameworks for observer design in estimation tasks, e.g., reconstructing unmeasured states (such as river bed profiles) from observable surrogates (Kaneko et al., 2018).
  • Closed-loop and model predictive control: Data-driven MPC using EDMD surrogates achieves "practical asymptotic stability" under cost-controllability conditions, with explicit error bounds on the closed-loop residual induced by the surrogate approximation (Bold et al., 2023). The use of Koopman eigenfunctions as lifting coordinates further enhances prediction and control capabilities for nonlinear systems, enabling linear controller synthesis for otherwise nonlinear targets (Folkestad et al., 2019).
  • Spectral decomposition and mode analysis: The identification of Koopman eigenvalues and modes enables detection of almost-invariant sets, decay rates, and spectral signatures critical for reduced-order modeling, turbulence analysis, and coherent structure extraction (Slipantschuk et al., 2019, Wormell, 2023, Colbrook, 2022).
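
The following sketch makes the forecasting use concrete. It assumes a simple discrete-time analog of the well-known polynomial example $x' = \lambda x$, $y' = \mu y + (\lambda^2 - \mu)x^2$, for which $\{x, y, x^2\}$ spans a Koopman-invariant subspace, so the lifted linear model reproduces the observable $f(x,y) = y$ along trajectories; parameter values and horizon are illustrative:

```python
import numpy as np

lam, mu = 0.9, 0.5                                   # illustrative parameters

def step(state):
    """Discrete map x' = lam*x, y' = mu*y + (lam^2 - mu)*x^2; span{x, y, x^2} is Koopman-invariant."""
    x, y = state
    return np.array([lam * x, mu * y + (lam**2 - mu) * x**2])

def psi(states):
    """Dictionary Psi(x, y) = [x, y, x^2] evaluated on an (M, 2) array of states."""
    x, y = states[:, 0], states[:, 1]
    return np.column_stack([x, y, x**2])

# Build snapshot pairs from random initial conditions and fit K as in Section 1
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(500, 2))
Y = np.array([step(s) for s in X])
PsiX, PsiY = psi(X), psi(Y)
G = PsiX.T @ PsiX / len(X)
A = PsiX.T @ PsiY / len(X)
K = np.linalg.pinv(G) @ A

# Finite-horizon forecast of the observable f(x, y) = y from a new initial condition:
# in the row convention, Psi(x_i) ~ Psi(x_0) K^i, so f(x_i) ~ (Psi(x_0) K^i) a for f = Psi a.
z0 = np.array([0.7, -0.4])
a = np.array([0.0, 1.0, 0.0])                        # coefficients of f = y in the dictionary
z, lifted = z0.copy(), psi(z0[None, :])[0]
for i in range(1, 11):
    z = step(z)                                      # ground-truth trajectory
    lifted = lifted @ K                              # Koopman forecast in lifted coordinates
    print(i, z[1], lifted @ a)                       # true y vs. predicted y (agree to numerical precision)
```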

A key practical consideration is that the quality and robustness of predictions depend critically on the observable dictionary, the sampling density, and whether relevant system structure (e.g., measure preservation or symmetry) is inherited by the finite-dimensional approximation.

6. Quantitative Error Bounds and Regularization

The reliability of EDMD is now underpinned by rigorous quantitative error analyses:

  • Pointwise and uniform error bounds: For kernel EDMD with, e.g., Wendland kernels, explicit $L^\infty$ bounds link the approximation error to the fill distance and smoothness of the kernel, yielding rates of the form $Ch^{k+1/2}$, where $h$ is the fill distance and $k$ the kernel's smoothness (Köhne et al., 27 Mar 2024).
  • Spectral convergence rates: For analytic maps with trigonometric or monomial dictionaries, the error in Koopman eigenvalue approximation (and the projection error) converges exponentially in the dictionary size $N$ (Wormell, 2023, Akindji et al., 12 Apr 2024).
  • Quantization robustness: Quantizing the measurements entering EDMD introduces an error decaying linearly with the quantization resolution $\epsilon$ in finite-data settings, and acts as a regularization in the large-data limit, with higher-order terms scaling as $O(\epsilon^2)$ (Maity et al., 19 Sep 2024). Smoother dictionary functions (with smaller Lipschitz constants) further suppress this error; a small numerical illustration follows this list.
  • Invariance quality: The forward-backward consistency index measures the defect of invariance of the dictionary under the approximate Koopman operator; it is invariant under basis change and sharply upper-bounds the worst-case relative root mean square prediction error across the dictionary span (Haseli et al., 2022).
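
As a small empirical check of the quantization behavior, the following sketch compares the EDMD matrix computed from rounded snapshots against the unquantized reference; the scalar dynamics, smooth monomial dictionary, and resolutions are assumed for illustration only:

```python
import numpy as np

def edmd_matrix(X, Y, dict_fn):
    """EDMD matrix K = G^+ A for snapshot arrays X, Y and a dictionary function."""
    PsiX, PsiY = dict_fn(X), dict_fn(Y)
    M = len(X)
    return np.linalg.pinv(PsiX.T @ PsiX / M) @ (PsiX.T @ PsiY / M)

def monomials(X):
    """Smooth monomial dictionary [1, x, x^2, x^3] on scalar states."""
    x = X.ravel()
    return np.column_stack([np.ones_like(x), x, x**2, x**3])

rng = np.random.default_rng(4)
X = rng.uniform(-1.0, 1.0, size=(5000, 1))
Y = np.sin(2.0 * X)                               # illustrative scalar dynamics
K_ref = edmd_matrix(X, Y, monomials)

# Quantize the measurements entering EDMD at resolution eps and compare operators
for eps in [0.1, 0.05, 0.025, 0.0125]:
    Xq = np.round(X / eps) * eps
    Yq = np.round(Y / eps) * eps
    print(eps, np.linalg.norm(edmd_matrix(Xq, Yq, monomials) - K_ref))  # shrinks roughly in proportion to eps
```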

These analyses inform practical decisions about sample sizes, dictionary complexity, kernel choice, quantization, and regularization in EDMD-based pipelines.

7. Extensions and Future Directions

Several research avenues are informed by the rapidly evolving theory and implementations:

  • Integration with deep learning: Neural dictionary learning (including autoencoder-based and neural ODE–based models) enables discovery of efficient and expressive representations for observables, with proven efficiency gains and improved performance in high-dimensional regimes (Li et al., 2017, Alford-Lago et al., 2021, Terao et al., 2021).
  • Structure- and physics-informed modeling: The incorporation of measure preservation, symmetries, and modularity directly into the operator approximation provides enhanced fidelity and computational savings for large-scale and complex systems (Colbrook, 2022, Harder et al., 1 Nov 2024, Guo et al., 22 Aug 2024).
  • Kernel and function space design: Quantitative RKHS-based error bounds stimulate the search for kernels (or other dictionaries) matched to the system's regularity for sharper convergence, and motivate comparative study of polynomial, Bernstein, and kernel feature dictionaries (Köhne et al., 27 Mar 2024).
  • High-dimensional and chaotic applications: Advances in data-driven spectral analysis of turbulent flows, climate models, chemical reactors, and complex networks are now enabled by scalable, robust variants of EDMD and its relatives (Colbrook, 2022, Guo et al., 22 Aug 2024).
  • Stochastic and control-theoretic extensions: Adaptation of EDMD to stochastic systems, analysis of backward Kolmogorov operators, and the development of Koopman-based model-predictive and robust controllers represent fertile ground for operator-theoretic learning.

A rigorous understanding has now emerged regarding the interaction of dictionary choice, sampling, structure preservation, and statistical properties, enabling EDMD to function as a foundational tool for modern computational dynamics.
