Stochastic NODE-DMD: Probabilistic Dynamics
- The paper introduces a probabilistic framework that extends DMD by integrating neural ODEs and latent SDEs to model latent nonlinear dynamics and quantify uncertainty.
- It leverages an implicit neural decoder and variational inference to reconstruct continuous spatiotemporal fields from sparse and noisy sensor data.
- Empirical results demonstrate improved mode recovery, accurate eigenvalue estimation, and enhanced spatial resolution flexibility compared to classical methods.
Stochastic NODE-DMD is a probabilistic framework for learning the dynamics of partially observed, continuous spatiotemporal systems from sparse sensor data. It extends Dynamic Mode Decomposition (DMD) by combining linear spectral interpretability with the nonlinear modeling capacity of Neural Ordinary Differential Equations (NODEs) and uncertainty quantification through a latent stochastic differential equation (SDE) formulation. The method enables continuous-time, continuous-space field reconstruction with rigorous predictive uncertainty estimates, interpretable dynamical modes, and the recovery of the underlying linear and nonlinear structure from severely under-sampled and noisy observations (Kim et al., 25 Nov 2025).
1. Latent Nonlinear Dynamics and Stochastic Formulation
The backbone of Stochastic NODE-DMD is a latent evolution law for the mode amplitudes $z(t) \in \mathbb{C}^r$, capturing the temporal dynamics in a reduced dimension. The dynamics are governed by a continuous-time SDE:

$$dz(t) = \big(\Lambda z(t) + f_\theta(z(t))\big)\,dt + \sigma\,dW(t),$$

where:
- $\Lambda \in \mathbb{C}^{r \times r}$ is a diagonal matrix containing classical DMD eigenvalues,
- $f_\theta$ is a neural network parameterized by $\theta$, modeling nonlinear residual drift,
- $\sigma^2$ is the process noise variance,
- $W(t)$ is complex Brownian motion.
This stochastic NODE recovers classical, linear DMD as the limiting case (i.e., $f_\theta \equiv 0$ and $\sigma \to 0$). The neural residual is modeled by an MLP (3–4 layers, 64–128 units, ELU activation), equipped with regularization via weight decay or spectral-norm constraints on Jacobians to control expressiveness and prevent overfitting.
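As a concrete illustration, the latent SDE and its linear-DMD limit can be simulated with a plain Euler–Maruyama loop. This is a minimal NumPy sketch: the eigenvalues and the weak cubic stand-in for the MLP residual $f_\theta$ are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
r, dt, n_steps = 4, 0.01, 500
sigma = 0.05

# Diagonal of continuous-time DMD eigenvalues (illustrative values).
lam = np.array([-0.1 + 2.0j, -0.1 - 2.0j, -0.3 + 0.5j, -0.3 - 0.5j])

def f_residual(z, eps=0.02):
    # Stand-in for the MLP residual drift f_theta: a weak cubic nonlinearity.
    return -eps * (np.abs(z) ** 2) * z

def simulate(z0, sigma, residual=f_residual):
    """Euler-Maruyama integration of dz = (Lambda z + f(z)) dt + sigma dW."""
    z = z0.astype(complex).copy()
    traj = [z.copy()]
    for _ in range(n_steps):
        drift = lam * z + residual(z)
        # Complex Brownian increment: independent real and imaginary parts.
        dW = np.sqrt(dt / 2) * (rng.standard_normal(r) + 1j * rng.standard_normal(r))
        z = z + drift * dt + sigma * dW
        traj.append(z.copy())
    return np.array(traj)

z0 = np.ones(r, dtype=complex)
stochastic_traj = simulate(z0, sigma)

# Limiting case f_theta = 0, sigma -> 0 recovers linear DMD: z(t) = exp(Lambda t) z0.
linear_traj = simulate(z0, 0.0, residual=lambda z: 0.0)
analytic_end = np.exp(lam * dt * n_steps) * z0
```

In the deterministic limit the Euler trajectory tracks the analytic DMD solution up to integration error, confirming the limiting-case claim.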
2. Observation Model and Field Reconstruction
Field reconstruction from latent dynamics is achieved through a neural implicit decoder, termed the "Mode Extractor" $\Phi_\psi$. Given a spatial coordinate $s \in \Omega$, a positional encoding

$$\gamma(s) = \big[\sin(2^0 \pi s), \cos(2^0 \pi s), \ldots, \sin(2^{B-1} \pi s), \cos(2^{B-1} \pi s)\big]$$

is computed and fed into a neural network to yield the spatial modes $\phi(s) = \Phi_\psi(\gamma(s)) \in \mathbb{C}^r$. The field mean at time $t$ is reconstructed as:

$$\hat{x}(s, t) = \sum_{k=1}^{r} \phi_k(s)\, z_k(t).$$

For sensors $\{s_i\}_{i=1}^{m}$, observations are modeled as

$$y_i(t) = \hat{x}(s_i, t) + \epsilon_i(t), \qquad \epsilon_i(t) \sim \mathcal{CN}(0, \sigma_{\mathrm{obs}}^2),$$

yielding a complex Gaussian likelihood:

$$p\big(y(t) \mid z(t)\big) = \prod_{i=1}^{m} \mathcal{CN}\big(y_i(t) \,\big|\, \hat{x}(s_i, t),\, \sigma_{\mathrm{obs}}^2\big).$$
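A minimal sketch of the decoder path in NumPy: a random linear map stands in for the trained Mode Extractor network, and the two-dimensional domain, band count, and mode rank are illustrative assumptions. The point is the mechanics of positional encoding followed by a mode-weighted sum.

```python
import numpy as np

rng = np.random.default_rng(1)
r, bands = 4, 6

def positional_encoding(s, bands):
    """Fourier features gamma(s) for a coordinate array s of shape (n, d)."""
    feats = []
    for j in range(bands):
        feats.append(np.sin(2.0**j * np.pi * s))
        feats.append(np.cos(2.0**j * np.pi * s))
    return np.concatenate(feats, axis=-1)

# Stand-in for the Mode Extractor Phi_psi: a random linear map producing
# r complex spatial modes per query point (the real method uses a trained MLP).
d_in = 2 * bands * 2  # 2 spatial dims, sin/cos per band
W = rng.standard_normal((d_in, r)) + 1j * rng.standard_normal((d_in, r))

def mode_extractor(s):
    return positional_encoding(s, bands) @ W  # shape (n, r)

def reconstruct_field(s, z):
    """Field mean x_hat(s, t) = sum_k phi_k(s) z_k(t)."""
    return mode_extractor(s) @ z

# Query an arbitrary grid: resolution is independent of any training grid.
n = 16
grid = np.stack(np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n)), -1).reshape(-1, 2)
z_t = rng.standard_normal(r) + 1j * rng.standard_normal(r)
field = reconstruct_field(grid, z_t)
```

Because the decoder takes raw coordinates, the same weights answer queries at any resolution, which is the basis of the resolution flexibility discussed later.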
3. Variational Inference and Training Objective
The intractable posterior over latent trajectories is approximated by a factorized variational distribution, with an amortized encoder providing the initial posterior:

$$q(z_0) = \mathcal{CN}\big(z_0 \mid \mu_0, \operatorname{diag}(\sigma_0^2)\big),$$

where $(\mu_0, \sigma_0^2) = \mathrm{Encoder}(y)$.
Temporal evolution of uncertainty is handled via uncertainty-aware Euler–Maruyama integration:
- Means: $\mu_{t+\Delta t} = \mu_t + \Delta t\,\big(\Lambda \mu_t + f_\theta(\mu_t)\big)$,
- Covariances: $\Sigma_{t+\Delta t} = J_t\, \Sigma_t\, J_t^{\dagger} + \sigma^2 \Delta t\, I$,

with $J_t = I + \Delta t\,\big(\Lambda + \nabla_z f_\theta(\mu_t)\big)$.
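The moment-propagation recursion can be sketched as follows; for readability the latent state is taken real-valued here, and a tanh map with its analytic Jacobian is a stand-in for the trained MLP residual.

```python
import numpy as np

r, dt, sigma = 4, 0.01, 0.05
rng = np.random.default_rng(2)
Lam = np.diag(np.array([-0.1, -0.2, -0.3, -0.4]))  # real-valued stand-in for Lambda

def f_theta(z):
    # Stand-in residual drift (a trained MLP in the actual method).
    return 0.1 * np.tanh(z)

def jac_f(z):
    # Analytic Jacobian of the stand-in residual: diag(0.1 * sech^2(z)).
    return np.diag(0.1 / np.cosh(z) ** 2)

def propagate_moments(mu, Sigma, n_steps):
    """Uncertainty-aware Euler-Maruyama: propagate mean and covariance."""
    for _ in range(n_steps):
        J = np.eye(r) + dt * (Lam + jac_f(mu))            # J_t = I + dt (Lambda + grad f)
        mu = mu + dt * (Lam @ mu + f_theta(mu))            # mean update
        Sigma = J @ Sigma @ J.T + sigma**2 * dt * np.eye(r)  # covariance update
    return mu, Sigma

mu0 = rng.standard_normal(r)
Sigma0 = 0.01 * np.eye(r)
mu_T, Sigma_T = propagate_moments(mu0, Sigma0, 200)
```

The recursion keeps the covariance symmetric by construction, and the additive $\sigma^2 \Delta t\, I$ term keeps it positive definite.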
The evidence lower bound (ELBO) per batch is:

$$\mathcal{L}_{\mathrm{ELBO}} = \mathbb{E}_{q}\big[\log p(y \mid z)\big] - D_{\mathrm{KL}}\big(q(z_0)\,\|\,p(z_0)\big) - \mathcal{L}_{\mathrm{cons}},$$

involving:
- A Gaussian NLL reconstruction term,
- Latent KL divergence against the prior $p(z_0)$,
- A consistency loss aligning encoder and SDE-propagated marginals (using mean squared error plus a small KL term).
The three loss terms are combined with fixed scalar weights.
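The three loss terms can be sketched directly in NumPy. The weight names (`w_nll`, `w_kl`, `w_cons`) and the prior $\mathcal{N}(0, I)$ are illustrative assumptions; the paper's specific weight values are not reproduced here.

```python
import numpy as np

def gaussian_nll(y, y_hat, sigma_obs):
    """Gaussian negative log-likelihood of observations y given predictions y_hat."""
    return 0.5 * np.sum((y - y_hat) ** 2 / sigma_obs**2 + np.log(2 * np.pi * sigma_obs**2))

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """KL(q || p) between diagonal Gaussians."""
    return 0.5 * np.sum(np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def neg_elbo(y, y_hat, sigma_obs, mu0, var0, mu_prop, mu_enc,
             w_nll=1.0, w_kl=1.0, w_cons=1.0):
    nll = gaussian_nll(y, y_hat, sigma_obs)
    # KL against a standard-normal prior on z0 (an assumption of this sketch).
    kl = kl_diag_gaussians(mu0, var0, np.zeros_like(mu0), np.ones_like(var0))
    # Consistency: align SDE-propagated and encoder marginals (MSE part only).
    cons = np.mean((mu_prop - mu_enc) ** 2)
    return w_nll * nll + w_kl * kl + w_cons * cons

rng = np.random.default_rng(4)
y = rng.standard_normal(10)
y_hat = y + 0.1 * rng.standard_normal(10)
mu0, var0 = rng.standard_normal(4), np.full(4, 0.5)
mu_prop, mu_enc = rng.standard_normal(4), rng.standard_normal(4)
loss = neg_elbo(y, y_hat, 0.1, mu0, var0, mu_prop, mu_enc)
```

Minimizing this negative ELBO with respect to all parameters is equivalent to maximizing the bound above.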
4. Extraction of Dynamical Structure and Spectral Factors
After training, $\Lambda$ and $f_\theta$ encode both the linear and nonlinear dynamics. The local linearization at a nominal state $z^{\ast}$ yields a Jacobian $A = \Lambda + \nabla_z f_\theta(z^{\ast})$. The eigenvalues of $A$ correspond to continuous-time DMD eigenvalues, and the right eigenvectors $v_k$ are latent-space Koopman modes. Spatial mode functions are reconstructed as $\tilde{\phi}_k(s) = \sum_{j=1}^{r} (v_k)_j\, \phi_j(s)$.
For classical comparison, discrete DMD eigenvalues are obtained via $\mu_k = e^{\lambda_k \Delta t}$, where $\Delta t$ is the sampling interval. Empirical mode and eigenvalue recovery is quantified against ground truth, with synthetic experiments using the log-ratio estimator

$$\hat{\lambda}_k = \frac{1}{\Delta t}\,\log \frac{z_k(t + \Delta t)}{z_k(t)}.$$
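The spectral-extraction step can be illustrated as follows: eigendecompose the local Jacobian, then check that the discrete-to-continuous conversion round-trips. The example eigenvalues and the negligible learned residual are invented for illustration.

```python
import numpy as np

dt = 0.1
# Ground-truth continuous-time eigenvalues of an illustrative system.
lam_true = np.array([-0.1 + 2.0j, -0.3 + 0.5j])
Lam = np.diag(lam_true)
grad_f = np.zeros((2, 2), dtype=complex)  # pretend the residual Jacobian vanishes at z*

A = Lam + grad_f                  # local linearization A = Lambda + grad f_theta(z*)
lam_cont, V = np.linalg.eig(A)    # continuous-time DMD eigenvalues, latent Koopman modes

# Discrete eigenvalues of the one-step map, then log recovery of the continuous ones.
mu_disc = np.exp(lam_cont * dt)
lam_recovered = np.log(mu_disc) / dt
```

The log recovery is exact as long as $|\mathrm{Im}(\lambda_k)\,\Delta t| < \pi$, i.e. the sampling resolves the oscillation without aliasing.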
5. Uncertainty Quantification and Continuous Spatiotemporal Queries
Stochastic NODE-DMD enables principled uncertainty quantification. Samples from the posterior $q(z_0)$ are propagated through the neural SDE to generate ensemble predictions, with the empirical variance reflecting both epistemic and aleatoric sources. Optionally, a Laplace approximation around the MAP estimate can quantify parameter uncertainty in $\theta$.
The implicit decoder enables field reconstruction at arbitrary space-time queries $(s, t)$, affording spatial resolution refinement without retraining.
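A sketch of the ensemble propagation under simplifying assumptions (linear drift only, illustrative eigenvalues): spread in the sampled initial conditions contributes epistemic variance, while the Brownian term contributes aleatoric variance.

```python
import numpy as np

rng = np.random.default_rng(3)
r, dt, n_steps, n_ens = 2, 0.01, 300, 256
lam = np.array([-0.05 + 1.0j, -0.2 + 0.3j])  # illustrative eigenvalues
sigma = 0.05

# Ensemble of initial conditions z0 ~ q(z0) = CN(mu0, std0^2 I).
mu0, std0 = np.ones(r, dtype=complex), 0.1
z = mu0 + std0 * np.sqrt(0.5) * (rng.standard_normal((n_ens, r))
                                 + 1j * rng.standard_normal((n_ens, r)))

for _ in range(n_steps):
    dW = np.sqrt(dt / 2) * (rng.standard_normal((n_ens, r))
                            + 1j * rng.standard_normal((n_ens, r)))
    z = z + (lam * z) * dt + sigma * dW  # residual drift omitted for brevity

ens_mean = z.mean(axis=0)
# Empirical variance mixes epistemic (z0 spread) and aleatoric (dW) contributions.
ens_var = np.mean(np.abs(z - ens_mean) ** 2, axis=0)
```

Decoding each ensemble member through the Mode Extractor would turn these latent statistics into per-location predictive intervals for the field.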
6. Algorithmic Steps and Training Protocol
The method comprises the following core steps:
- Encode the sensor observations and their locations to obtain the initial posterior $q(z_0)$.
- Extract the mean of, or sample from, $q(z_0)$.
- Integrate the latent SDE (Euler–Maruyama) to obtain the trajectory $z(t)$.
- Decode using the Mode Extractor $\Phi_\psi$; compute the Gaussian NLL at the sensor locations.
- Accumulate latent KL and consistency losses.
- Backpropagate through the sequence and update all parameters to maximize the ELBO.
- Employ curriculum learning starting with teacher forcing, annealed to full autoregressive prediction.
Critical hyperparameters include the mode rank $r$ ($r=4$ for synthetic data, $r=8$ for PDE flows), the number of positional encoding bands, the process noise level $\sigma$ (tuned per dataset), the loss weights, the Adam learning rate, a batch size of 16, and the number of training epochs.
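One plausible form of the curriculum schedule in the final step, a linear anneal from teacher forcing to autoregressive rollout; the warm-up fraction and linear shape are assumptions of this sketch, since the paper's exact schedule is not reproduced here.

```python
def teacher_forcing_prob(epoch, n_epochs, warmup_frac=0.3):
    """Probability of feeding ground-truth states instead of model predictions.

    Full teacher forcing during the warm-up phase, then a linear anneal
    down to fully autoregressive prediction by the final epoch.
    """
    warmup = warmup_frac * n_epochs
    if epoch < warmup:
        return 1.0
    return max(0.0, 1.0 - (epoch - warmup) / (n_epochs - warmup))
```

At each rollout step, a Bernoulli draw with this probability decides whether the integrator restarts from the encoded ground truth or continues from its own prediction.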
7. Benchmark Results and Structural Properties
Empirical evaluation demonstrates significant advantages in reconstructing spatiotemporal fields from sparse and noisy data:
- Synthetic sequence (r=4, 32×32, T=50, 10% sensors):
- Recovered modes exhibit high cosine similarity to the true modes.
- Continuous eigenvalue error well below NDMD's $1.78$.
- Gray–Scott (r=8, 100×100, T=100):
- Lower 1-step error than NDMD with 10% sensors.
- 2D Navier–Stokes vorticity (r=8, 100×100, T=50):
- 1-step error below NDMD's.
- Cylinder flow (r=8, 128×128, T=150):
- 1-step error below NDMD's.
The method demonstrates calibrated uncertainty: when trained on multiple realizations, it learns a distribution across latent trajectories matching ensemble variability, avoiding regression to the mean.
Spatial resolution flexibility is evidenced by only a modest error increase when queried at resolutions coarser or finer than those used for training, attributable to the continuous-space implicit neural parameterization.
Stochastic NODE-DMD unifies DMD's interpretability with neural ODE expressivity and Bayesian rigor, delivering continuous spatiotemporal prediction and uncertainty quantification from sparse, noisy sensors (Kim et al., 25 Nov 2025).