LaSDI Taylor Latent ODE Surrogate
- The paper introduces a Taylor-expanded latent ODE framework that integrates ResNet modules and sparse SINDy regression to efficiently model parameterized PDE dynamics.
- It employs latent space embeddings and multi-stage neural architectures to achieve high-accuracy state reconstruction with relative L2 errors under 2% across varied parameter regimes.
- The surrogate offers significant speedups, mesh-independence, and flexibility, ensuring scalable and interpretable emulation of complex dynamical systems.
A LaSDI Latent ODE Surrogate is a reduced-order modeling framework that leverages latent space embeddings and explicit ordinary differential equation structures to construct fast, interpretable surrogates for parameterized, high-dimensional dynamical systems, especially those governed by partial differential equations (PDEs). The LaSDI (Latent Space Dynamics Identification) paradigm employs nonlinear mappings to encode high-fidelity simulation data into a low-dimensional latent space, where the dynamics governing system evolution are modeled by ODEs with the possibility of parameter dependence. Recent advances have established a wide spectrum of LaSDI surrogates, including variants built on Taylor-series expansions and residual networks (e.g., P-TLDINets), methods incorporating uncertainty quantification via Gaussian process interpolation (e.g., GPLaSDI), multi-stage architectures (mLaSDI), and robust weak-form approaches (WLaSDI). This article provides a comprehensive technical account emphasizing the P-TLDINet/LaSDI-Taylor style latent-ODE surrogate, as exemplified in "Parametric Taylor series based latent dynamics identification neural networks" (Lin et al., 2024).
1. Core Model Architecture and Latent ODE Formulation
At the heart of the LaSDI Taylor surrogate lies a set of neural network modules jointly trained to encode, propagate, and decode system states:
- Latent State and Parametric Mapping: The primary latent state, $z(t; \mu)$, is a low-dimensional embedding ($z \in \mathbb{R}^{n_z}$ with $n_z \ll N$, where $N$ is the full system dimension), with explicit dependence on the parameter vector $\mu$.
- Time Evolution via Taylor-Expanded ODE: The evolution of $z$ is governed by a parametric ODE,
$$\frac{dz}{dt} = f(z; \mu),$$
which is approximated by its truncated Taylor series up to order $K$:
$$z(t + \Delta t) \approx z(t) + \sum_{k=1}^{K} \frac{\Delta t^{k}}{k!}\, z^{(k)}(t),$$
where a small truncation order $K$ suffices in practice, and $z^{(1)}, \dots, z^{(K)}$ are network outputs representing derivatives up to the $K$th order.
- Architecture:
- NN_dyn: A multi-layer ResNet-based FCNN that outputs the set of time derivatives $z^{(1)}, \dots, z^{(K)}$.
- NN$_{z_0}$: A compact FCNN that maps parameters $\mu$ to initial latent states $z_0$.
- NN_rec: A ResNet-augmented FCNN that reconstructs high-dimensional states at arbitrary spatial points.
ResNet skip-connections enable these components to implement both nonlinear Taylor expansion terms and learned integration steps, with network flexibility for representing complex latent flows.
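The truncated Taylor update above can be sketched in a few lines. Below is a minimal NumPy illustration (function and variable names are ours, not the paper's), treating the NN_dyn outputs simply as given derivative arrays:

```python
import numpy as np

def taylor_step(z, derivs, dt):
    """Advance latent state z by one truncated-Taylor-expansion step.

    derivs: list of arrays [z', z'', ..., z^(K)] standing in for the
    outputs of a dynamics network (hypothetical NN_dyn); here they are
    simply inputs.
    """
    z_next = z.astype(float).copy()
    fact = 1.0
    for k, dk in enumerate(derivs, start=1):
        fact *= k                          # running k!
        z_next += (dt ** k) / fact * dk    # + (dt^k / k!) z^(k)
    return z_next
```

With `z = [1.0]` and both derivatives equal to `[1.0]` (the order-2 truncation of $e^t$), a step of `dt = 0.1` yields $1 + 0.1 + 0.1^2/2 = 1.105$.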
2. Training Methodology and Joint Loss Structure
Training proceeds with simultaneous optimization of all submodules (NN$_{z_0}$, NN_dyn with its implicit Taylor coefficients, NN_rec, and an auxiliary sparse-regression coefficient matrix $\Xi$) against a composite loss:
- $\mathcal{L}_{\text{rec}}$: Enforces accurate reconstruction of high-dimensional fields, e.g. $\mathcal{L}_{\text{rec}} = \sum_{t,x} \| u(x, t; \mu) - \text{NN\_rec}(z(t; \mu), x, \mu) \|_2^2$.
- $\mathcal{L}_{\text{ini}}$: Ensures correct initial-condition mapping via NN_rec applied to NN$_{z_0}(\mu)$.
- $\mathcal{L}_{\text{dyn}}$: Couples the learned latent time derivative, $\dot z$, to a SINDy-style sparse regression form $\dot z \approx \Theta(z)\,\Xi$,
where $\Theta(z)$ is a library of candidate terms (constant, linear) and $\Xi$ is jointly trained.
- $\beta$ coefficients: Control the relative weighting of these terms; a sparsity penalty regularizes $\Xi$.
All latent map parameters and sparse-regression coefficients are optimized via standard mini-batch or full-batch stochastic optimization.
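A schematic of such a composite loss, assuming a generic reconstruction / initial-condition / SINDy-consistency split with weights $\beta$ and an $\ell_1$ penalty on the coefficient matrix (all names and the exact weighting are illustrative, not taken from the paper):

```python
import numpy as np

def composite_loss(u_true, u_rec, z0_pred, z0_true, zdot, theta, xi,
                   beta=(1.0, 1.0, 1.0), l1=1e-3):
    """Sketch of a joint LaSDI-style loss: reconstruction +
    initial-condition + SINDy-consistency terms, plus an l1
    sparsity penalty on the regression coefficients xi."""
    l_rec = np.mean((u_true - u_rec) ** 2)        # field reconstruction
    l_ini = np.mean((z0_pred - z0_true) ** 2)     # initial latent state
    l_dyn = np.mean((zdot - theta @ xi) ** 2)     # dz/dt ≈ Θ(z) Ξ
    return (beta[0] * l_rec + beta[1] * l_ini + beta[2] * l_dyn
            + l1 * np.abs(xi).sum())
```

When all data terms are satisfied exactly, only the sparsity penalty `l1 * |xi|` remains, which is the mechanism that drives $\Xi$ toward a sparse, interpretable form.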
3. Parametric Generalization and KNN-IDW Interpolation
To enable generalization over parameter space, each training point $\mu^{(i)}$ receives an associated identified coefficient matrix $\Xi^{(i)}$. For unseen parameters $\mu^*$:
- Neighbor Search: Find the $k$ nearest training points under Euclidean distance.
- Inverse-Distance Weights: $w_i = \dfrac{d_i^{-p}}{\sum_j d_j^{-p}}$, with $d_i = \|\mu^* - \mu^{(i)}\|_2$.
- Coefficient Interpolation: $\Xi(\mu^*) = \sum_i w_i\, \Xi^{(i)}$.
This creates a continuous and locally adaptive latent dynamics model, using the interpolated coefficients within the ODE $\dot z = \Theta(z)\,\Xi(\mu^*)$.
Time propagation then utilizes either classical integrators (e.g., RK4) or the neural Taylor-expansion integrator.
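The KNN-IDW coefficient interpolation is compact to implement. The sketch below assumes training parameters stacked row-wise and their identified coefficient matrices stacked along the first axis (names are illustrative):

```python
import numpy as np

def knn_idw_coeffs(mu_star, mu_train, xi_train, k=4, p=2, eps=1e-12):
    """Interpolate a SINDy coefficient matrix for an unseen parameter
    mu_star from its k nearest training parameters, using
    inverse-distance weighting with power p.

    mu_train: (n_train, n_mu) parameter samples
    xi_train: (n_train, ...) coefficient matrices, one per sample
    """
    d = np.linalg.norm(mu_train - mu_star, axis=1)   # distances to all samples
    idx = np.argsort(d)[:k]                          # k nearest neighbors
    w = 1.0 / (d[idx] ** p + eps)                    # inverse-distance weights
    w /= w.sum()                                     # normalize: sum_i w_i = 1
    return np.tensordot(w, xi_train[idx], axes=1)    # Ξ* = Σ_i w_i Ξ^(i)
```

The `eps` guard keeps the weights finite when `mu_star` coincides with a training point, in which case the interpolation collapses (up to `eps`) to that point's coefficients.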
4. Surrogate Evaluation: Accuracy, Speed, and Mesh Independence
The latent ODE surrogate is quantitatively evaluated using the relative $L^2$ error $\epsilon = \|\tilde{u} - u\|_2 / \|u\|_2$ between predicted and high-fidelity states. Benchmark studies show that, for both 2D Burgers and lock-exchange flow problems, P-TLDINets maintain errors below roughly 2% on test parameter regimes beyond training coverage. Training is rapid: e.g., 36 minutes for 25 training points (Burgers) and 57 s of online inference for 225 points, an orders-of-magnitude (94× offline, 100× online) speedup compared to GPLaSDI/gLaSDI and high-fidelity solvers.
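For reference, the relative $L^2$ error metric is straightforward to compute over flattened space-time samples:

```python
import numpy as np

def relative_l2_error(u_pred, u_true):
    """Relative L2 error ||u_pred - u_true||_2 / ||u_true||_2."""
    return np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true)
```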
The decoder NN_rec supports arbitrary spatial discretizations, enabling prediction on grids ranging from uniform meshes of varying resolution to unstructured $1278$–$4900$ node configurations, without retraining; errors remain within the 1–2% regime.
5. Interpretability, Flexibility, and Comparison to Classical LaSDI
Unlike classical LaSDI pipelines, which require explicit autoencoder construction for nonlinear encoding/decoding, this Taylor/ResNet approach is lightweight and grid-independent. The inclusion of a sparse-regression SINDy step for the first-order latent dynamics preserves interpretability (the right-hand side of the ODE is a linear combination of known basis functions with learned coefficients). The absence of strict autoencoder modularity circumvents pitfalls in high-frequency or highly nonlinear data regimes, promoting stability, accuracy, and easier optimization.
Additionally, the parametric KNN-IDW interpolation architecture allows seamless adaptation over broad parameter domains, with smooth generalization even when autoencoder-based methods lose latent stability.
6. Online Inference Pipeline and Deployment
The online prediction pipeline executes as follows:
- Compute $\Xi(\mu^*)$ via KNN-IDW.
- Set $z_0 = $ NN$_{z_0}(\mu^*)$.
- Iteratively update $z_n \mapsto z_{n+1}$ via the truncated Taylor expansion (or a classical integrator applied to $\dot z = \Theta(z)\,\Xi(\mu^*)$).
- For each time $t_n$ and spatial point $x$, predict $u(x, t_n; \mu^*) \approx$ NN_rec$(z_n, x, \mu^*)$.
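Putting the online stages together, a minimal latent rollout on the interpolated SINDy dynamics might look as follows. This sketch uses a constant-plus-linear library and forward Euler for brevity; the paper instead uses RK4 or the neural Taylor integrator, and the decoder step mapping each $z_n$ back to the full field is omitted:

```python
import numpy as np

def theta_library(z):
    """Candidate library Θ(z): constant + linear terms (a minimal choice)."""
    return np.concatenate(([1.0], z))

def rollout(z0, xi, dt, n_steps):
    """Online-stage sketch: integrate dz/dt = Θ(z) Ξ* with forward Euler.

    xi: interpolated coefficient matrix of shape (1 + n_z, n_z),
    e.g. the output of a KNN-IDW interpolation step.
    """
    zs = [np.asarray(z0, dtype=float)]
    for _ in range(n_steps):
        z = zs[-1]
        zs.append(z + dt * theta_library(z) @ xi)   # explicit Euler step
    return np.array(zs)                              # (n_steps + 1, n_z)
```

For example, with $\Xi = [0, 1]^\top$ (so $\dot z = z$), one Euler step of size 0.1 from $z_0 = 1$ gives $z_1 = 1.1$.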
The entire stack provides an adaptable latent-ODE surrogate for parametric PDEs with mesh-independence and interpretable dynamic structure. The framework's generality extends to nonlinear systems, multi-physics, and high-dimensional parameterizations, as explored in recent theory and applications (Lin et al., 2024).
Summary Table: Key Structural Components of the LaSDI Taylor Latent ODE Surrogate
| Module | Function | Model Type |
|---|---|---|
| NN_dyn$(z, \mu)$ | Outputs time derivatives $z^{(1)}, \dots, z^{(K)}$ | ResNet-based FCNN |
| NN$_{z_0}(\mu)$ | Maps parameters to latent initial condition | FCNN |
| NN_rec$(z, x, \mu)$ | Reconstructs field from latent state and parameter | ResNet-augmented FCNN |
| SINDy-style sparse coefficients $\Xi$ | Enforce sparse, interpretable structure in the first derivative | Linear regression |
| KNN-IDW ($\Xi(\mu^*)$) | Interpolates coefficients from training points for new parameters | Nonparametric |
For additional methodological context and latent-ODE algorithmic variants (classical LaSDI, GPLaSDI, mLaSDI, WLaSDI), see (Bonneville et al., 2024, Bonneville et al., 2023, Anderson et al., 10 Jun 2025, Tran et al., 2023). The LaSDI Taylor (P-TLDINet) approach fundamentally expands the tractable regions of parametric PDE surrogate modeling, offering a scalable, interpretable, and highly efficient solution for scientific emulation tasks.