Extended Dynamic Mode Decomposition (EDMD)

Updated 15 September 2025
  • Extended Dynamic Mode Decomposition is a data-driven method that lifts state data into a subspace of observables to approximate the Koopman operator.
  • It employs fixed, kernel, or machine-learned dictionaries to extract spectral quantities for applications like model reduction, forecasting, and control.
  • EDMD integrates computational techniques such as random kernel approximations and tensor decompositions to efficiently analyze high-dimensional, nonlinear systems.

Extended Dynamic Mode Decomposition (EDMD) refers to a family of data-driven operator approximation methods developed for the spectral and modal analysis of general nonlinear dynamical systems. EDMD extends the standard Dynamic Mode Decomposition (DMD) framework by lifting state data into a finite-dimensional subspace of observables ("dictionary functions") and seeking a matrix approximation of the (infinite-dimensional) Koopman operator, which governs the evolution of observables along the flow. EDMD thus enables the extraction of spectral quantities (eigenvalues, eigenfunctions, and modes) associated with the Koopman operator, serving purposes such as model reduction, forecasting, and modal decomposition of complex, potentially high-dimensional and nonlinear systems.

1. Mathematical Formalism and Operator-Theoretic Foundations

The EDMD framework is motivated by Koopman operator theory, in which a (possibly nonlinear) discrete-time dynamical system

$x_{n+1} = F(x_n)$

is associated with the Koopman operator $\mathcal{K}$, acting on a space $\mathcal{F}$ of observables $g:\mathbb{R}^d \rightarrow \mathbb{C}$ by composition: $\mathcal{K}g(x) = g(F(x))$. This operator is linear (but infinite-dimensional). The central goal in EDMD is to approximate $\mathcal{K}$ on a finite-dimensional subspace $\mathcal{F}_D = \mathrm{span}\{\psi_1, \ldots, \psi_N\}$ (the dictionary) using sampled trajectories. By collecting snapshot pairs $\{(x_k, x_k^+)\}_{k=1}^M$, where $x_k^+ = F(x_k)$, the method forms data matrices

$\Psi_X = [\psi(x_1), \ldots, \psi(x_M)] \quad \text{and} \quad \Psi_Y = [\psi(x_1^+), \ldots, \psi(x_M^+)]$

with $\psi(x) = [\psi_1(x), \ldots, \psi_N(x)]^\top$. A finite-dimensional Koopman approximation $\mathbf{K}$ is defined via least squares: $\mathbf{K} = \Psi_Y \Psi_X^{+}$, where $\Psi_X^{+}$ is the Moore–Penrose pseudoinverse of $\Psi_X$. The spectral information (Koopman eigenvalues, eigenvectors/functions, and modes) can then be extracted via the eigendecomposition of $\mathbf{K}$.
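
To make the formalism concrete, the following minimal NumPy sketch assembles the pipeline end to end. The two-dimensional map and the monomial dictionary are illustrative assumptions chosen so that the dictionary span is Koopman-invariant; nothing here is taken from the cited papers.

```python
import numpy as np

# Illustrative nonlinear map (an assumption of this sketch)
def F(x):
    return np.array([0.9 * x[0], 0.5 * x[1] + x[0] ** 2])

# Dictionary psi(x) = [1, x1, x2, x1^2]; its span is invariant under
# composition with F, so the least-squares fit is exact here
def psi(x):
    return np.array([1.0, x[0], x[1], x[0] ** 2])

# Snapshot pairs (x_k, x_k^+) with x_k^+ = F(x_k)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
PsiX = np.column_stack([psi(x) for x in X])      # N x M
PsiY = np.column_stack([psi(F(x)) for x in X])   # N x M

# Least-squares Koopman matrix K = Psi_Y Psi_X^+
K = PsiY @ np.linalg.pinv(PsiX)

# Koopman eigenvalues; eigenfunction coefficients are eigenvectors of K^T
mu, W = np.linalg.eig(K.T)
print(np.sort(mu.real)[::-1])   # approximately [1.0, 0.9, 0.81, 0.5]
```

For generic dictionaries whose span is not invariant, the same computation returns only an $L^2$-optimal projection of the Koopman operator, which is the setting analyzed in Section 4.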

2. Dictionary Design: Fixed, Kernel, and Learned Approaches

The choice of the dictionary $\{\psi_j\}$ critically determines the ability of EDMD to recover the underlying dynamics:

  • Fixed dictionaries: Polynomials, Fourier (trigonometric) modes, or radial basis functions are natural choices for analytic or smooth systems. For analytic expanding maps, approximation with trigonometric polynomials yields exponential decay of the approximation error as the degree grows (Wormell, 2023, Slipantschuk et al., 2019).
  • Kernel EDMD (kEDMD): The method can be recast in an RKHS framework, where the observables are canonical feature functions $\phi_x(\cdot) = k(x, \cdot)$ and the Koopman approximation acts as an interpolant in this space (Köhne et al., 27 Mar 2024). This circumvents explicit selection of basis functions and enables pointwise error control via kernel interpolation theory; a Gram-matrix sketch appears at the end of this section.
  • Machine-learned dictionaries: Recent work replaces a fixed dictionary with one parameterized by neural networks. These include multilayer perceptrons (Li et al., 2017), deep autoencoders (Alford-Lago et al., 2021), or neural ODEs (Terao et al., 2021), which are trained jointly with the Koopman approximation to adapt the basis to the data and minimize approximation error.

In highly nonlinear or high-dimensional systems, learned dictionaries have repeatedly been shown to reduce the number of required observables while improving accuracy and reconstruction quality.
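
As referenced in the kEDMD item above, the Gram-matrix route avoids forming features explicitly. The sketch below uses one common convention ($G_{ij} = k(x_i, x_j)$, $A_{ij} = k(x_i, x_j^+)$, eigenvalues of $G^{+}A$) with the same toy map as before; the kernel bandwidth, regularization, and convention are assumptions of this sketch, and the interpolation-based kEDMD of (Köhne et al., 27 Mar 2024) differs in its details.

```python
import numpy as np

def rbf(XA, XB, sigma=0.7):
    # Gaussian kernel matrix: entry (i, j) = k(a_i, b_j)
    d2 = ((XA[:, None, :] - XB[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Snapshot pairs for the same illustrative map as in Section 1
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(150, 2))
Y = np.column_stack([0.9 * X[:, 0], 0.5 * X[:, 1] + X[:, 0] ** 2])

G = rbf(X, X)   # Gram matrix  G_ij = k(x_i, x_j)
A = rbf(X, Y)   # cross matrix A_ij = k(x_i, x_j^+)

# Regularized solve standing in for the pseudoinverse; eigenvalues of
# G^+ A estimate Koopman eigenvalues without explicit basis functions
K_hat = np.linalg.solve(G + 1e-8 * np.eye(len(X)), A)
mu = np.linalg.eigvals(K_hat)
print(np.sort(np.abs(mu))[::-1][:5])
```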

3. Computational and Algorithmic Extensions

Several developments extend the computational scope and structural flexibility of EDMD:

  • Random kernel approximations: For large data or high-dimensional problems, the feature matrix may be constructed via data-independent random Fourier features (for translation-invariant kernels) or data-dependent Nyström methods (DeGennaro et al., 2017). These approximations permit scalable implementations of EDMD/KDMD by compressing the feature space; a short sketch follows after this list.
  • Tensor-based EDMD: For snapshot data with high spatial dimension or multiway structure (e.g., multidimensional grids), tensor-train decompositions preserve the multi-index structure and allow efficient computation by working with compressed representations (Klus et al., 2016).
  • Dealing with incomplete or quantized data: Extensions of EDMD based on the Mori–Zwanzig formalism incorporate memory effects due to unresolved variables using the t-model approximation (Katrutsa et al., 2022). Quantization in the inputs is shown to result in a regularization effect on the estimated operator in the large data regime (Maity et al., 19 Sep 2024).
  • Inhomogeneous and networked systems: xDMD introduces a bias term and residual update to handle inhomogeneous boundary conditions and source terms (Lu et al., 2020), while network DMDc builds local models for subsystems and combines them via the network topology (Heersink et al., 2017).
  • Structured symmetries and equivariance: For equivariant systems, group-convolutional EDMD imposes equivariance constraints, yielding convolutional matrix representations whose eigenanalysis and action can be efficiently computed via generalized Fourier transform (Harder et al., 1 Nov 2024).
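
As an example of the random-feature item above, a data-independent random Fourier dictionary can be dropped straight into the EDMD least-squares step. The feature count, bandwidth, and toy map are assumptions of this sketch rather than the construction of (DeGennaro et al., 2017).

```python
import numpy as np

def rff_map(n_features, dim, sigma=1.0, seed=0):
    # Random Fourier features z(x) = sqrt(2/D) * cos(W^T x + b); inner
    # products of z approximate a Gaussian kernel of bandwidth sigma
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / sigma, size=(dim, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return lambda X: np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

z = rff_map(n_features=100, dim=2, seed=3)   # one fixed draw, reused below
rng = np.random.default_rng(4)
X = rng.uniform(-1.0, 1.0, size=(500, 2))
Y = np.column_stack([0.9 * X[:, 0], 0.5 * X[:, 1] + X[:, 0] ** 2])

PsiX, PsiY = z(X).T, z(Y).T                  # N x M feature matrices
K = PsiY @ np.linalg.pinv(PsiX)              # EDMD on the randomized dictionary
```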

4. Spectral Convergence, Accuracy, and Theoretical Guarantees

Rigorous analysis of EDMD reveals:

  • Convergence rates: For analytic expanding maps and trigonometric polynomial dictionaries, the Galerkin projection error, and thus the EDMD approximation error, decays exponentially with dictionary size, provided the sampling measure is sufficiently smooth (analytic density or absolutely continuous w.r.t. Haar) (Wormell, 2023, Slipantschuk et al., 2019). The operator norm error $\|I - \mathcal{P}_K\|$ falls like $e^{-cK}$, and the eigenvalue errors converge at similar rates.
  • Pointwise error bounds in kEDMD: For native spaces of compactly supported radial kernels (e.g., Wendland functions), the approximation error in the uniform norm decays as a power of the fill distance $h$ (e.g., $C h^{k+1/2}$), as derived from kernel interpolation estimates (Köhne et al., 27 Mar 2024).
  • Accuracy assessment: Direct error metrics are available for individual Koopman eigenpairs, specifically quantifying the extent to which $\varphi(F(x)) \approx \mu \varphi(x)$ holds on testing data, providing mode-by-mode accuracy guarantees without requiring knowledge of ground truth (Zhang et al., 2017); see the sketch after this list.
  • Spectral pollution: Standard $L^2$ projections in EDMD may compromise the triangular structure of the exact Koopman operator, potentially introducing spurious (nonphysical) eigenvalues. Analytic EDMD utilizes Taylor projections and reproducing kernel Hilbert space structure to preserve block-triangular structure and guarantee absence of spectral pollution, with a spectrum faithful to the linearization at hyperbolic equilibria (Mauroy et al., 24 May 2024).
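
The accuracy-assessment item above admits a compact implementation. The sketch below computes, for each eigenpair, the relative residual of $\varphi(F(x)) \approx \mu \varphi(x)$ on held-out snapshot pairs; the normalization is an assumption of this sketch, and (Zhang et al., 2017) should be consulted for the exact criterion.

```python
import numpy as np

def eigenpair_residuals(K, PsiX_test, PsiY_test):
    # Relative residual ||phi_j(F(x)) - mu_j * phi_j(x)|| / ||phi_j(x)||
    # per eigenpair, evaluated on held-out snapshot pairs
    mu, W = np.linalg.eig(K.T)     # columns of W: eigenfunction coefficients
    PhiX = W.T @ PsiX_test         # eigenfunctions at the test states
    PhiY = W.T @ PsiY_test         # eigenfunctions at their images under F
    res = np.linalg.norm(PhiY - mu[:, None] * PhiX, axis=1)
    return mu, res / np.linalg.norm(PhiX, axis=1)
```

Eigenpairs with small residuals can be trusted mode by mode even when the dictionary as a whole is imperfect.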

5. Applications: Model Reduction, Control, and Operator-Based Analysis

EDMD methods are widely used in diverse application areas:

  • Model reduction: In high-dimensional or infinite-dimensional systems (e.g., fluid flows), modal decompositions via EDMD yield low-dimensional surrogates and coherent structure identification, crucial for reduced-order modeling and data-driven simulation (Klus et al., 2016, DeGennaro et al., 2017, Lu et al., 2020).
  • Nonlinear control and MPC: By “lifting” nonlinear dynamics to a linear representation, EDMD enables direct application of linear model predictive control and stabilization techniques; a lifted-prediction sketch follows after this list. Recent work establishes rigorous error bounds and practical asymptotic stability for EDMD-based surrogate models in MPC frameworks (Bold et al., 2023).
  • System identification and denoising: Online extensions, notably using Kalman filtering, allow simultaneous estimation of DMD parameters and denoising in real time, even in the presence of process and observation noise (Nonomura et al., 2018).
  • Regions of attraction and invariant sets: By leveraging Koopman eigenfunctions computed via EDMD, data-driven characterization and computation of regions of attraction and invariant manifolds in high-dimensional nonlinear systems become possible (Garcia-Tenorio et al., 2022).
  • Analysis of chaotic and ergodic systems: For maps with strong mixing or chaotic behavior, orthogonal polynomial EDMD provides accurate convergence to physically meaningful Ruelle–Pollicott resonances and supports robust forecasting and empirically validated modal decomposition (Wormell, 2023, Slipantschuk et al., 2019).
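
To illustrate the lifting idea behind the MPC item above: once $\mathbf{K}$ advances observables and a decoder $B$ maps observables back to the state, multi-step prediction is a linear recursion. The decoder and forecasting loop below are generic assumptions of this sketch, not the specific construction of (Bold et al., 2023).

```python
import numpy as np

def fit_lifted_model(X, Y, psi):
    # K advances observables (psi(x^+) ~ K psi(x)); B decodes the state
    # (x ~ B psi(x)), so (K, B) acts as a linear surrogate model
    PsiX = np.column_stack([psi(x) for x in X])
    PsiY = np.column_stack([psi(y) for y in Y])
    K = PsiY @ np.linalg.pinv(PsiX)
    B = X.T @ np.linalg.pinv(PsiX)
    return K, B

def forecast(x0, K, B, psi, n_steps):
    # Multi-step prediction entirely in the lifted linear coordinates
    z, traj = psi(x0), []
    for _ in range(n_steps):
        z = K @ z
        traj.append(B @ z)
    return np.array(traj)
```

In an MPC loop, a lifted pair of this kind can serve as the linear prediction model over the control horizon.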

6. Practical Considerations, Limitations, and Ongoing Directions

EDMD’s applicability and performance depend strongly on:

  • Dictionary richness and invariance: The span of the dictionary should be as nearly invariant under the true Koopman operator as possible for high-accuracy recovery of eigenfunctions and modal dynamics (a simple residual diagnostic is sketched after this list); adaptive learning of the dictionary via deep learning architectures is an active research direction (Li et al., 2017, Alford-Lago et al., 2021, Terao et al., 2021).
  • Computational scaling: Dimensionality and data volume pose significant challenges; randomized kernel approximations, tensor decompositions, equivariant constraints, and block-structured algorithms are powerful mitigations (Klus et al., 2016, DeGennaro et al., 2017, Harder et al., 1 Nov 2024).
  • Noise and quantization: Observation noise, process noise, and limited-precision data impact operator estimation but may be accommodated via regularization, Kalman filtering, or by accepting regularization as a feature in certain regimes (Nonomura et al., 2018, Maity et al., 19 Sep 2024).
  • Function space selection: The embedding space (e.g., Hardy-Hilbert, Sobolev, RKHS, or analytic function spaces) determines spectral properties, rates of convergence, and risk of spectral pollution (Slipantschuk et al., 2019, Wormell, 2023, Mauroy et al., 24 May 2024).
  • Symmetry and structure: Known symmetries (spatial, temporal, networked) can be encoded to improve sample efficiency, interpretability, and scalability (Harder et al., 1 Nov 2024, Heersink et al., 2017).
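
As a simple check of the invariance requirement flagged above, one can report the relative residual of the lifted linear fit on held-out data; this is a heuristic diagnostic assumed for this sketch, not a metric from the cited papers.

```python
import numpy as np

def invariance_residual(K, PsiX, PsiY):
    # Relative Frobenius residual of psi(x^+) ~ K psi(x); values far from
    # zero indicate the dictionary span is far from Koopman-invariant
    return np.linalg.norm(PsiY - K @ PsiX) / np.linalg.norm(PsiY)
```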

Ongoing challenges include rigorous convergence analysis for learning-based dictionaries, development of robust methods in the presence of strong nonlinearities or unresolved variables, and systematic approaches for dictionary enrichment and model selection. Extensions to continuous-time systems, stochastic processes, and parameter-varying or control-driven contexts remain prominent areas of research.


Summary Table: Key EDMD Variants and Their Distinguishing Features

| Method | Dictionary Structure | Scalability Features | Application Highlights |
|---|---|---|---|
| Standard EDMD | Fixed (polynomial, Fourier) | | Model reduction, spectral analysis |
| kEDMD | RKHS canonical features | Kernel interpolation, pointwise error bounds | Data-driven, error controllability |
| Tensor-based | Multiway/TT representation | Low-rank core computations | Large-scale, multi-indexed data |
| DL-EDMD | NN-parametric dictionary | Autoencoder, NODEs | Reduced basis, adaptivity |
| Random kernel | Random Fourier, Nyström | Compression, block updates | High-dimensional datasets |
| Convolutional | Group equivariant (symmetry) | Fourier diagonalization | Fast computation, high dimension |
| Kalman-EDMD | EKF-based online estimation | trPOD for dimension reduction | Online denoising, system ID |
| xDMD | Residual and bias inclusion | Inhomogeneous dynamics | Nonhomogeneous PDEs, surrogate models |
| MZ-DMD | Memory correction (Mori–Zwanzig) | Gradient-based optimization | Incomplete data, unresolved dynamics |

EDMD and its variants represent a comprehensive, rigorous, and highly versatile set of methodologies for data-driven spectral analysis, forecasting, reduced-order modeling, and control of complex dynamical systems. The integration of operator-theoretic perspectives, computational innovations (kernels, tensors, deep learning), and structure-awareness (symmetries, inhomogeneities, networked systems) continues to broaden both the theoretical foundation and the real-world applicability of EDMD in the analysis and design of nonlinear dynamical systems.
