
Weighted Dynamic Mode Decomposition (wtDMD)

Updated 27 November 2025
  • Weighted DMD is an extension of DMD that incorporates adaptive weightings on snapshots to improve convergence and handle nonstationary systems.
  • It reformulates the operator using weighted covariance matrices, enhancing spectral accuracy and robustness even with noise or spatial inhomogeneity.
  • Applications range from fluid dynamics to real-time forecasting, where tailored weights improve performance in systems with limited or evolving data.

Weighted Dynamic Mode Decomposition (wtDMD) is a family of algorithms that extends Dynamic Mode Decomposition (DMD) by incorporating weightings on snapshots, whether in the time domain, the data domain, or the inner-product structure. These weightings are motivated by the need to accelerate convergence of ergodic averages, adapt to time-varying or nonstationary systems, control the influence of noise or spatial inhomogeneity, or impose problem-specific structure (such as physical mass matrices or noise covariances). wtDMD has had a significant impact on data-driven modeling, prediction, and modal analysis for dynamical systems, especially those with limited, nonuniform, or rapidly evolving datasets.

1. Mathematical Foundations and Classical DMD

Classical DMD operates on a sequence of state vectors (snapshots) $\{X_n\}_{n=1}^{N+1}\subset\mathbb{R}^d$ associated with a dynamical system sampled at discrete, usually uniform, time intervals. The core data matrices are

$$\mathbb{X} = [X_1, X_2, \dots, X_N]\in\mathbb{R}^{d\times N}, \qquad \mathbb{Y} = [X_2, X_3, \dots, X_{N+1}]\in\mathbb{R}^{d\times N}.$$

DMD seeks the best-fit linear operator $A\in\mathbb{R}^{d\times d}$ in the least-squares sense, $X_{n+1}\approx A\, X_n$ for $n=1,\ldots,N$, yielding the solution $A = \mathbb{Y}\,\mathbb{X}^{\dagger}$, where $\mathbb{X}^{\dagger}$ is the Moore–Penrose pseudoinverse. The eigendecomposition of $A$, $A V = V \Lambda$, provides the DMD modes $v_j$ and associated eigenvalues $\lambda_j$, corresponding to coherent spatio-temporal patterns and their dynamics (Bou-Sakr-El-Tayar et al., 21 Nov 2025).
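As a concrete illustration, the least-squares construction above takes only a few lines of NumPy (a minimal sketch; the function and variable names are illustrative, not taken from the cited paper):

```python
import numpy as np

def dmd(snapshots):
    """Exact DMD: fit X_{n+1} ~ A X_n over a snapshot sequence.

    snapshots: array of shape (d, N+1) whose columns are X_1, ..., X_{N+1}.
    Returns the eigenvalues and eigenvectors (DMD modes) of A = Y X^+.
    """
    X = snapshots[:, :-1]          # [X_1, ..., X_N]
    Y = snapshots[:, 1:]           # [X_2, ..., X_{N+1}]
    A = Y @ np.linalg.pinv(X)      # least-squares operator via the pseudoinverse
    lam, V = np.linalg.eig(A)      # eigenvalues (dynamics) and modes
    return lam, V

# Toy example: one trajectory of a planar rotation by angle 0.1
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
snaps = np.stack([np.linalg.matrix_power(R, k) @ np.array([1.0, 0.0])
                  for k in range(50)], axis=1)
lam, V = dmd(snaps)   # eigenvalues sit on the unit circle at angles +/- theta
```

Because the toy system is exactly linear, the pseudoinverse solve recovers the rotation's eigenvalues $e^{\pm i\theta}$ to machine precision.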

2. Weighted Averages and Tapering

A critical limitation of classical DMD in practical scenarios is its reliance on equal-weighted, unwindowed averages, which often leads to slow convergence, especially for ergodic or nearly periodic dynamics. Weighted Birkhoff averages replace uniform weights with a smooth taper $w:[0,1]\to\mathbb{R}_+$, subject to

$$\int_0^1 w(s)\,\mathrm{d}s = 1, \qquad w^{(m)}(0) = w^{(m)}(1) = 0 \quad \forall\, m\geq 0.$$

Weights $w_k = w(k/N)$ de-emphasize the trajectory endpoints. The weighted average of an observable $g$ becomes

$$WB_N(g)(x_0) = \frac{1}{\alpha_N} \sum_{k=0}^{N} w_k\, g(f^k(x_0)), \qquad \alpha_N = \sum_{k=0}^N w_k.$$

For smooth or periodic systems, this approach yields super-polynomial or exponential convergence when estimating statistical properties, outperforming uniform averaging; for chaotic systems, convergence is never worse than in the unweighted case (Bou-Sakr-El-Tayar et al., 21 Nov 2025).
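A common concrete choice of taper is the exponential bump $w(s)=\exp(-1/(s(1-s)))$. The following sketch (illustrative names, assuming NumPy) estimates the space average of an observable along an irrational circle rotation:

```python
import numpy as np

def bump_weights(N):
    """Taper w_k = w(k/N) with w(s) = exp(-1/(s(1-s))) on (0, 1).

    Every derivative of w vanishes at s = 0 and s = 1, the property behind
    the super-polynomial convergence of the weighted average.
    """
    s = np.arange(N + 1) / N
    w = np.zeros(N + 1)
    inside = (s > 0) & (s < 1)
    w[inside] = np.exp(-1.0 / (s[inside] * (1.0 - s[inside])))
    return w

def weighted_birkhoff(g_values, w):
    """(1/alpha_N) * sum_k w_k g(f^k(x0)) for samples of g along an orbit."""
    return np.sum(w * g_values) / np.sum(w)

# Circle rotation by the golden mean; the space average of cos(2*pi*x) is 0.
rho = (np.sqrt(5) - 1) / 2
N = 2000
orbit = (np.arange(N + 1) * rho) % 1.0
wb = weighted_birkhoff(np.cos(2 * np.pi * orbit), bump_weights(N))
# wb is far closer to 0 than the uniform average at the same N
```

The uniform average of the same samples carries an $O(1/N)$ error, while the tapered average is typically accurate to near machine precision at this $N$.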

3. Weighted DMD Operator Formulation

wtDMD replaces the uniform accumulation in the Gram-type (covariance) matrices with weighted accumulations,
$$\frac{1}{\alpha_N} \mathbb{Y}_w \mathbb{X}_w^\top = \frac{1}{\alpha_N}\sum_{n=1}^N w((n-1)/N)\, X_{n+1} X_n^\top,$$
where the weighted snapshot matrices are defined as

$$\mathbb{X}_w = \mathbb{X}\, W^{1/2}, \qquad \mathbb{Y}_w = \mathbb{Y}\, W^{1/2},$$

and $W = \operatorname{diag}(w_0, \ldots, w_{N-1})$ collects the weights of the $N$ snapshot pairs. The weighted DMD operator is obtained by

$$A_w = \mathbb{Y}_w \, (\mathbb{X}_w)^\dagger = \Big(\frac{1}{\alpha_N} \mathbb{Y}_w\mathbb{X}_w^\top\Big) \, \Big(\frac{1}{\alpha_N} \mathbb{X}_w\mathbb{X}_w^\top\Big)^\dagger.$$

The spectral decomposition of $A_w$,

$$A_w V_w = V_w \Lambda_w,$$

gives the weighted DMD modes and eigenvalues (Bou-Sakr-El-Tayar et al., 21 Nov 2025).
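Relative to classical DMD, the only change is the column scaling by $W^{1/2}$, as the following sketch shows (illustrative names, assuming NumPy; for an exactly linear system any positive weighting recovers the same operator, while on finite or noisy data the taper is what improves convergence):

```python
import numpy as np

def weighted_dmd(snapshots, w):
    """Weighted DMD: scale each snapshot pair by the square root of its weight.

    snapshots: (d, N+1) array; w: length-N positive weights w_0, ..., w_{N-1}.
    Returns the weighted operator A_w and its eigendecomposition.
    """
    X = snapshots[:, :-1]
    Y = snapshots[:, 1:]
    sw = np.sqrt(w)
    Xw = X * sw                      # right-multiplication by W^{1/2}
    Yw = Y * sw
    Aw = Yw @ np.linalg.pinv(Xw)     # A_w = Y_w X_w^+
    lam, V = np.linalg.eig(Aw)
    return Aw, lam, V

# Toy example: planar rotation with a simple sin^2 taper on the pairs
theta = 0.2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
snaps = np.stack([np.linalg.matrix_power(R, k) @ np.array([1.0, 0.0])
                  for k in range(40)], axis=1)
N = snaps.shape[1] - 1
w = np.sin(np.pi * (np.arange(N) + 0.5) / N) ** 2
Aw, lam, modes = weighted_dmd(snaps, w)
```

Here the data are exactly linear, so $A_w$ equals the rotation matrix for any positive taper; the benefit of the weighting appears when the averages have not yet converged.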

4. Extensions: High-Order, Online, and Inner-Product Weighting

wtDMD generalizes along several axes:

  • Exponential Forgetting for Time-Varying Systems: Assigning weights $w_i = \rho^{d_i}$, where $d_i$ is the delay of snapshot $i$ and $\rho\in(0,1]$ is the forgetting factor, supports adaptation to nonstationarity. The weighted cost

$$\sum_{k=1}^T w_k \|f_k - A f_{k-1}\|_2^2$$

is minimized by solving with appropriately weighted matrices, and can be updated online using recursive rank-1 updates (Zhang et al., 2017, Cheng et al., 2021).

  • High-Order Autoregressive (AR) Extensions: wtDMD naturally extends to systems with lagged and exogenous variables, stacking multiple time steps in the input matrix and performing an analogous weighted least-squares fit (Cheng et al., 2021).
  • Weighted Inner Product Spaces: DMD can be reformulated with a user-prescribed, Hermitian positive-definite weight $W$ defining the inner product on the data space, such as a mass matrix from a PDE discretization, a noise covariance, or physically informed weights. The resulting Rayleigh–Ritz operator $C_W = (X^* W X)^{-1} (X^* W Y)$ generalizes DMD to arbitrary weighted geometries (Drmač et al., 2017).
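As an illustration of the first bullet, exponentially forgetting weights fold directly into the same weighted least-squares template (a batch sketch with illustrative names; the cited online variants instead update the weighted covariances recursively rather than refitting from scratch):

```python
import numpy as np

def forgetting_dmd(snapshots, rho):
    """Weighted least-squares operator with exponential forgetting.

    The most recent snapshot pair has delay 0 and weight 1; a pair with
    delay d gets weight rho**d, so old data decays geometrically.
    """
    X = snapshots[:, :-1]
    Y = snapshots[:, 1:]
    N = X.shape[1]
    delays = np.arange(N - 1, -1, -1)     # oldest pair -> largest delay
    sw = np.sqrt(rho ** delays)
    return (Y * sw) @ np.linalg.pinv(X * sw)

# Toy example: an exactly linear (stationary) system, where any rho in (0, 1]
# recovers the same operator; rho < 1 matters when the dynamics drift.
theta = 0.15
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
snaps = np.stack([np.linalg.matrix_power(R, k) @ np.array([1.0, 0.3])
                  for k in range(25)], axis=1)
A = forgetting_dmd(snaps, rho=0.9)
```

Setting $\rho = 0.5^{1/h}$ gives the weights a half-life of $h$ snapshots.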

5. Algorithmic and Practical Implementation

The core procedural steps for non-online wtDMD are:

  1. Form $\mathbb{X}, \mathbb{Y}$ from the snapshots.
  2. Construct the diagonal weight matrix $W$ using the taper or other domain-appropriate criteria.
  3. Compute $\mathbb{X}_w, \mathbb{Y}_w$.
  4. Obtain the weighted pseudoinverse, typically via the SVD.
  5. Form $A_w = \mathbb{Y}_w (\mathbb{X}_w)^\dagger$.
  6. Extract modes and eigenvalues via the eigendecomposition of $A_w$.
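The six steps above can be sketched with an SVD-based pseudoinverse and optional rank truncation (illustrative names, assuming NumPy; the mode normalization follows the standard exact-DMD convention, not a formula from the cited papers):

```python
import numpy as np

def wtdmd_modes(snapshots, w, r=None):
    """Batch wtDMD with an SVD-based pseudoinverse and optional truncation.

    snapshots: (d, N+1) array; w: length-N weights; r: optional rank cutoff.
    Returns eigenvalues and (exact) DMD modes of the weighted operator.
    """
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    sw = np.sqrt(w)
    Xw, Yw = X * sw, Y * sw                            # steps 1-3
    U, S, Vh = np.linalg.svd(Xw, full_matrices=False)  # step 4
    if r is not None:                                  # optional noise filtering
        U, S, Vh = U[:, :r], S[:r], Vh[:r, :]
    Atilde = U.conj().T @ Yw @ Vh.conj().T / S         # projected A_w (step 5)
    lam, Wr = np.linalg.eig(Atilde)                    # step 6, reduced basis
    modes = (Yw @ Vh.conj().T / S) @ Wr / lam          # lift back to full space
    return lam, modes

# Toy example: planar rotation with a sin^2 taper, truncated to rank 2
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
snaps = np.stack([np.linalg.matrix_power(R, k) @ np.array([1.0, 0.0])
                  for k in range(30)], axis=1)
N = snaps.shape[1] - 1
w = np.sin(np.pi * (np.arange(N) + 0.5) / N) ** 2
lam, modes = wtdmd_modes(snaps, w, r=2)
```

Working in the reduced SVD basis keeps the eigenproblem small ($r \times r$) even when the ambient dimension $d$ is large.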

For streaming/online data, rank-reduced projections are updated in closed form: key “core” matrices (e.g., $P$, $Q_X$, $Q_Y$) are updated recursively, and the bases are augmented as required to keep representing new dynamical directions (Cheng et al., 2021). For models with exponential forgetting, or in online settings, the weighted covariance matrices can be updated recursively, avoiding storage of past data and maintaining $O(n^2)$ computational cost per step (Zhang et al., 2017).

6. Theoretical Guarantees and Empirical Performance

wtDMD inherits the Koopman-invariant subspace structure of classical DMD, but convergence to the limiting operator $A_\infty$ is accelerated:
$$\|A_w - A_\infty\| = \begin{cases} O(N^{-m})\ \forall m & \text{(super-polynomial; smooth/quasiperiodic)}, \\ O(e^{-cN}) & \text{(exponential; analytic)}, \\ O(1/N) & \text{(chaotic; same as the unweighted case)}. \end{cases}$$
In empirical studies, such as laminar cylinder wake flow at $\mathrm{Re}=100$, the relative error $E_w(N)$ for $A_w$ is several orders of magnitude smaller than the unweighted DMD error $E(N)$ for moderate to large $N$ ($N\gtrsim 200$), with the leading Koopman eigenvalues also recovered more accurately (Bou-Sakr-El-Tayar et al., 21 Nov 2025). In high-dimensional, sparse, or noise-contaminated settings, low-rank truncation in the weighted bases filters noise and improves robustness, especially when combined with online updates (Cheng et al., 2021).

7. Application Scenarios and Numerical Examples

wtDMD has proven effective in diverse settings:

  • Fluid Dynamics: Orders-of-magnitude acceleration in convergence of modal decompositions, improved spectral sharpness, and cleaner frequency recovery using physical mass-matrix weights (Bou-Sakr-El-Tayar et al., 21 Nov 2025, Drmač et al., 2017).
  • Transport Networks: High-order weighted DMD for metro OD-matrix forecasting with exponential weighting achieves consistent improvement and robustness over full retraining, with compact online updates (Cheng et al., 2021).
  • Real-Time and Time-Varying Systems: Online wtDMD with forgetting factors rapidly adapts to system changes while maintaining low variance when tuned appropriately (Zhang et al., 2017).
  • General Multivariate Time Series: Domain-informed weights (physical, statistical, or measurement-driven) allow problem-specific tailoring, improved numerical conditioning, and enhanced interpretability (Drmač et al., 2017).
| Study/Domain | Type of Weight | Benefit |
| --- | --- | --- |
| Fluid flows (Bou-Sakr-El-Tayar et al., 21 Nov 2025) | Tapered Birkhoff | Super-polynomial convergence |
| Metro transportation (Cheng et al., 2021) | Exponential in time | Adaptation to nonstationarity |
| Numerical PDE (Drmač et al., 2017) | Mass matrix | Physical energy norm, robustness |

A common theme is the ease of implementation: modifying standard DMD code to include weighted averages or replace the data inner product is straightforward, whether for batch, recursive, or online computation.

8. Selection of Weights and Open Directions

Weight selection is problem-dependent:

  • Tapered weights: For stationary/smooth systems, choose bump functions vanishing at endpoints.
  • Exponential decay: For streaming/time-varying systems, set the forgetting factor $\rho$ according to the desired memory decay (e.g., a target half-life).
  • Inner-product weights: Derive from discretization, measurement noise, or physical principles.
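For the exponential-decay case, a convenient parametrization is to pick $\rho$ from a target half-life (a small sketch; the helper name is illustrative):

```python
def forgetting_factor(half_life):
    """Forgetting factor rho such that a snapshot pair's weight halves
    every `half_life` steps (half_life measured in snapshot counts)."""
    return 0.5 ** (1.0 / half_life)

rho = forgetting_factor(100)            # weight halves every 100 snapshots
effective_window = 1.0 / (1.0 - rho)    # rough effective number of samples
```

Shorter half-lives track nonstationarity faster at the cost of higher variance; the effective-window heuristic gives a quick sense of that trade-off.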

Open challenges include optimal tuning of weights for robustness and interpretability, joint spatial-temporal weighting for non-uniform sampling, efficient online updates for large-scale WW, and rigorous augmented backward/perturbation theory in the weighted setting (Drmač et al., 2017).
