Linear Multistep Methods for Discovery (LMM-DMD)
- LMM-DMD is a data-driven framework that uses classical linear multistep methods and neural approximators to recover unknown vector fields in ODE models from sampled state data.
- It rigorously decomposes the overall error into discretization and network approximation components, with convergence guarantees based on consistency and stability criteria.
- The method has been validated on a range of systems—from linear oscillators to chaotic and high-dimensional biochemical networks—demonstrating broad applicability.
Linear Multistep Methods for Discovery (LMM-DMD) are a class of algorithms for data-driven inference of continuous-time dynamical systems from observed state data. The approach leverages classical linear multistep methods (LMMs) to discretize time derivatives, combining them with expressive neural approximators, such as feedforward neural networks or Kolmogorov–Arnold networks (KANs), to recover the vector fields governing unknown ordinary differential equations (ODEs). Rigorous error analyses underpin the approach, decomposing the discovery error into contributions from numerical discretization and function approximation.
1. Mathematical Framework
The central objective of LMM-DMD is the recovery of the unknown vector field $f$ in the autonomous ODE
$$\dot{x}(t) = f(x(t)), \qquad x(t) \in \mathbb{R}^d,$$
given a sequence of uniformly sampled state observations $x_n \approx x(t_n)$, $n = 0, \dots, N$, with $t_n = nh$ and step size $h > 0$.
A general $M$-step LMM for approximating time derivatives has the form
$$\sum_{m=0}^{M} \alpha_m x_{n+m} = h \sum_{m=0}^{M} \beta_m f(x_{n+m}),$$
where the coefficients $\alpha_m$ and $\beta_m$ encode the specific numerical scheme: e.g., Adams–Bashforth (AB), Adams–Moulton (AM), or Backward Differentiation Formula (BDF).
The inverse problem is formulated by treating the sampled $x_n$ as fixed and seeking to identify $f$ (or its parametric approximation $f_\theta$) so that the LMM residuals
$$r_n(\theta) = \sum_{m=0}^{M} \alpha_m x_{n+m} - h \sum_{m=0}^{M} \beta_m f_\theta(x_{n+m})$$
are minimized or vanish. Parametric representations using neural architectures such as deep feedforward networks or B-spline-based Kolmogorov–Arnold networks (KANs) are adopted for $f_\theta$. Auxiliary conditions, such as finite-difference initializations for $f$ at early time steps, may be imposed as necessary to guarantee a unique minimizer when the resulting system is under- or over-determined (Hu et al., 25 Jan 2025, Du et al., 2021, Keller et al., 2019).
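The residual-based objective above is straightforward to implement. The following is a minimal numpy sketch (function and variable names are illustrative, not taken from the cited papers) of the LMM residuals and the sum-of-squares loss that a parametric $f_\theta$ would be trained against:

```python
import numpy as np

def lmm_residuals(alpha, beta, x, h, f):
    """Residuals r_n = sum_m alpha[m] x_{n+m} - h * sum_m beta[m] f(x_{n+m})
    for an M-step LMM with coefficients alpha, beta of length M+1."""
    M = len(alpha) - 1
    fx = np.array([f(xn) for xn in x])          # f evaluated at every sample
    res = []
    for n in range(len(x) - M):
        r = sum(alpha[m] * x[n + m] for m in range(M + 1)) \
            - h * sum(beta[m] * fx[n + m] for m in range(M + 1))
        res.append(r)
    return np.array(res)

def lmm_loss(alpha, beta, x, h, f):
    """Sum-of-squares loss, minimized over the parameters of f."""
    r = lmm_residuals(alpha, beta, x, h, f)
    return float(np.sum(r ** 2))
```

For data generated exactly by the AB-1 (forward Euler) recursion, the loss vanishes at the true field and is strictly positive for a wrong one, which is the mechanism the discovery method exploits.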
2. Neural Network and Spline-Based Approximators
The LMM-DMD framework accommodates various function classes for $f_\theta$:
- Feedforward Neural Networks: Deep fully connected networks (e.g., ReLU or tanh activations) parameterize $f_\theta$. A typical architecture employs depth $L$, uniform width $W$, and nonlinearity $\sigma$:
$$f_\theta(x) = A_L\,\sigma\bigl(A_{L-1}\cdots\sigma(A_1 x + b_1)\cdots + b_{L-1}\bigr) + b_L.$$
The parameter count scales as $\mathcal{O}(L W^2)$ (Du et al., 2021, Zhu et al., 2022).
- Kolmogorov–Arnold Networks (KANs): For increased approximation capacity and explicit error bounds, two-layer KANs with B-spline activation functions are constructed. Each scalar component $f_j$ is approximated in the Kolmogorov superposition form
$$\hat f_j(x) = \sum_{q=0}^{2d} \Phi_q\!\left(\sum_{p=1}^{d} \phi_{q,p}(x_p)\right),$$
where each univariate function $\phi_{q,p}$, $\Phi_q$ employs B-splines of degree $k$ and $G$ interior knots.
The approximation error of KANs is quantified in the uniform norm,
$$\| f_j - \hat f_j \|_{L^\infty} \le C_\Phi\, \omega_f\!\left(G^{-1}\right),$$
where $C_\Phi$ denotes a spline-induced Lipschitz constant, and $\omega_f$ is the modulus of continuity of $f$ (Hu et al., 25 Jan 2025).
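As a hedged illustration of the two-layer superposition structure, the sketch below builds random piecewise-linear univariate functions (degree-1 B-splines on uniform grids, evaluated with `np.interp`) in place of trained spline activations. The class and its initialization are hypothetical and do not reproduce the trained architecture from the cited work:

```python
import numpy as np

class PiecewiseLinearKAN:
    """Minimal two-layer Kolmogorov-Arnold sketch for f: R^d -> R.
    Each univariate function phi_{q,p} and Phi_q is piecewise linear
    (a degree-1 B-spline expansion) on a uniform grid with G interior knots.
    Hypothetical illustration only."""
    def __init__(self, d, G, lo=-1.0, hi=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.d, self.Q = d, 2 * d + 1
        self.knots_in = np.linspace(lo, hi, G + 2)            # inner-layer grid
        self.knots_out = np.linspace(-d, d, G + 2)            # rough range of inner sums
        self.inner = rng.normal(size=(self.Q, d, G + 2))      # phi_{q,p} values at knots
        self.outer = rng.normal(size=(self.Q, G + 2))         # Phi_q values at knots

    def __call__(self, x):
        x = np.asarray(x, float)
        # inner layer: s_q = sum_p phi_{q,p}(x_p)
        s = np.array([sum(np.interp(x[p], self.knots_in, self.inner[q, p])
                          for p in range(self.d))
                      for q in range(self.Q)])
        # outer layer: sum_q Phi_q(s_q); np.interp clamps outside the knot range,
        # which is acceptable for a sketch
        return float(sum(np.interp(s[q], self.knots_out, self.outer[q])
                         for q in range(self.Q)))
```

In a discovery setting, the knot values `inner` and `outer` would be the trainable parameters fitted against the LMM residuals.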
3. Discretization, Consistency, and Stability in Discovery
The convergence theory for LMM-DMD diverges from that of forward integration. The defining LMM system for the inverse problem is
$$h \sum_{m=0}^{M} \beta_m f_{n+m} = \sum_{m=0}^{M} \alpha_m x_{n+m}, \qquad n = 0, \dots, N - M,$$
with data $x_n = x(t_n)$ and unknowns $f_n \approx f(x(t_n))$. Consistency and stability criteria for convergence in the discovery context differ as follows (Keller et al., 2019):
- Consistency: The local defect $\tau_n$ on the trajectory must satisfy $\tau_n = \mathcal{O}(h^{p})$, where $p$ is the LMM order.
- Stability: The root condition for the second characteristic polynomial $\sigma(w) = \sum_{m=0}^{M} \beta_m w^m$ governs the stability and conditioning of the recovery of $f$. The strong root condition (all roots of $\sigma$ inside the unit disc) ensures bounded error growth with $n$.
- Convergence Theorem: If the second polynomial satisfies the strong root condition and the method is consistent of order $p$, then
$$\max_{n} \| f_n - f(x(t_n)) \| = \mathcal{O}(h^{p}) \quad \text{as } h \to 0.$$
Analysis for specific schemes yields:
| Scheme | Orders with Discovery Convergence | Root Condition |
|---|---|---|
| Adams–Bashforth | up to order 6 | strong root condition holds |
| Adams–Moulton | orders 1–2 only (implicit Euler, trapezoid) | unstable for higher orders |
| BDF | all orders | always stable ($\sigma$ has only the root $0$) |
(Keller et al., 2019, Du et al., 2021)
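This stability classification can be reproduced numerically by computing the roots of the second characteristic polynomial. The sketch below uses the standard Adams and BDF coefficients and tests the strong root condition with `np.roots`:

```python
import numpy as np

def sigma_roots(beta):
    """Roots of the second characteristic polynomial
    sigma(w) = sum_m beta[m] w^m (beta given lowest degree first)."""
    coeffs = np.trim_zeros(np.asarray(beta, float)[::-1], 'f')  # highest degree first
    if len(coeffs) <= 1:
        return np.array([])          # constant sigma: no roots
    return np.roots(coeffs)

def strong_root_condition(beta, tol=1e-10):
    """True iff every root of sigma lies strictly inside the unit disc."""
    r = sigma_roots(beta)
    return bool(np.all(np.abs(r) < 1 - tol))

# beta_0 .. beta_M for a few standard schemes (explicit methods have beta_M = 0)
schemes = {
    "AB-2":  [-1/2, 3/2, 0],
    "AB-3":  [5/12, -16/12, 23/12, 0],
    "AM-2":  [-1/12, 8/12, 5/12],    # third-order Adams-Moulton
    "BDF-2": [0, 0, 2/3],            # sigma(w) = beta_2 w^2, roots at 0
}
```

Running the check confirms the table: the AB and BDF polynomials have all roots strictly inside the unit disc, while the third-order Adams–Moulton polynomial has a root of modulus greater than one.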
4. Error Bounds: Approximation and Discretization Splitting
The LMM-DMD error admits an explicit decomposition into discretization and learning (approximation) terms:
$$\max_n \| f_\theta(x_n) - f(x_n) \| \le C \bigl( h^{p} + \kappa\, \epsilon_{\mathcal{F}} \bigr),$$
where $\epsilon_{\mathcal{F}}$ is the best-approximation error achievable by the chosen function class $\mathcal{F}$, $\kappa$ is the condition number of the LMM matrix (uniformly bounded for zero-stable schemes), and $C$ is independent of $h$ and of the network size (Du et al., 2021, Hu et al., 25 Jan 2025).
For KANs, the uniform-norm error can be further bounded via the Kolmogorov superposition theorem and explicit B-spline regularity. Provided that $f$ is Hölder continuous with exponent $\gamma \in (0, 1]$,
$$\| f_\theta - f \|_{L^\infty} \le C \bigl( h^{p} + G^{-\gamma} \bigr).$$
The solution-trajectory error for the learned $f_\theta$, by Grönwall's inequality, satisfies
$$\| \hat x(t) - x(t) \| \le \frac{e^{L_f t} - 1}{L_f}\, \sup_{x} \| f_\theta(x) - f(x) \|,$$
where $\hat x$ solves $\dot{\hat x} = f_\theta(\hat x)$ from the same initial condition and $L_f$ is a Lipschitz constant of $f$.
Error analyses using inverse modified differential equations (IMDE) further show that the total approximation error on compact domains $K$ can be expressed as
$$\| f_\theta - f \|_{L^\infty(K)} \le C \bigl( \mathcal{L} + h^{p} \bigr),$$
where $\mathcal{L}$ is the learning (training) loss (Zhu et al., 2022).
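The Grönwall-type trajectory bound can be checked numerically. In this hypothetical example, the true field is $f(x) = -x$ (Lipschitz constant $L_f = 1$) and the "learned" field is off by a constant $\varepsilon$ in sup norm; both are integrated with a fine RK4 step so that time-discretization error is negligible:

```python
import numpy as np

def rk4(f, x0, h, n):
    """Classical RK4 with a fine step; the gap between the two computed
    trajectories is then dominated by the field mismatch, not by RK4 error."""
    x = np.array(x0, float)
    traj = [x.copy()]
    for _ in range(n):
        k1 = f(x); k2 = f(x + h / 2 * k1); k3 = f(x + h / 2 * k2); k4 = f(x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(x.copy())
    return np.array(traj)

eps, L_f = 1e-3, 1.0                  # sup-norm field error and Lipschitz constant
f_true    = lambda x: -x              # L_f-Lipschitz with L_f = 1
f_learned = lambda x: -x + eps        # stands in for f_theta, off by eps everywhere

h, n = 0.01, 500                      # integrate on [0, 5]
t = h * np.arange(n + 1)
err = np.abs(rk4(f_learned, [1.0], h, n) - rk4(f_true, [1.0], h, n)).ravel()
bound = (eps / L_f) * (np.exp(L_f * t) - 1)   # Gronwall envelope, same initial data
assert np.all(err <= bound + 1e-12)   # trajectory error stays under the envelope
```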
5. Representative Numerical Experiments
LMM-DMD has been systematically validated across a range of ODE benchmarks:
- Linear and Nonlinear Oscillators: Recovery of simple, damped, and cubic oscillators using both explicit (AB) and implicit (BDF, AM) LMMs. Empirically, fitted-field errors decay at $\mathcal{O}(h^{p})$ with decreasing step size until limited by approximation errors, precisely as predicted by theory.
- Lorenz '63 (Chaotic): On canonical chaotic trajectories, both Adams–Bashforth and BDF methods of orders $1$–$4$ yield convergence rates matching the multistep order for sufficiently small $h$. For long prediction horizons, errors grow exponentially due to positive Lyapunov exponents.
- Biochemical Networks (e.g., the glycolytic oscillator): Using AM(1) with a KAN approximator, the method accurately recovers all species' time series in both the training and extrapolation regions, retaining small sup-norm errors.
In all cases, the observed grid errors mirror theoretical predictions, exhibiting polynomial decay in $h$ and saturation at the neural approximation threshold. BDF and low-order AB schemes are especially robust in high-dimensional or stiff regimes (Hu et al., 25 Jan 2025, Du et al., 2021, Keller et al., 2019, Zhu et al., 2022).
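The predicted order-$p$ decay is easy to reproduce for a linear system, where the AB-1 (forward Euler) discovery problem has a closed-form least-squares solution. In this hypothetical example we recover $A$ in $\dot{x} = Ax$ for a harmonic oscillator from exact snapshots and observe the fitted-field error roughly halving as $h$ halves, consistent with first-order convergence:

```python
import numpy as np

def discover_linear_field(X, h):
    """Fit A_hat in x' = A x from snapshots via the AB-1 (forward-Euler) residual:
    minimize sum_n || x_{n+1} - x_n - h * A_hat x_n ||^2 (closed-form least squares)."""
    X0, X1 = X[:-1].T, X[1:].T                  # d x N snapshot matrices
    return (X1 - X0) @ np.linalg.pinv(X0) / h

def rotation_data(h, n):
    """Exact samples of x' = A x with A = [[0, 1], [-1, 0]] (harmonic oscillator);
    the exact flow map over one step is the rotation matrix exp(A h)."""
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    R = np.array([[np.cos(h), np.sin(h)], [-np.sin(h), np.cos(h)]])
    X = [np.array([1.0, 0.0])]
    for _ in range(n):
        X.append(R @ X[-1])
    return A, np.array(X)

errs = []
for h in (0.1, 0.05, 0.025):
    A, X = rotation_data(h, 200)
    A_hat = discover_linear_field(X, h)
    errs.append(np.linalg.norm(A_hat - A))
# first-order scheme: the error should roughly halve each time h halves
```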
6. Practical Considerations, Limitations, and Recommendations
Robust application of LMM-DMD mandates attention to several practical aspects:
- Scheme Selection: Only LMMs whose second characteristic polynomial satisfies the strong root condition ensure stability for discovery. BDF methods (any order) and AB up to sixth order are safe. AM is limited to AM-0 (implicit Euler) and AM-1 (trapezoid rule).
- Auxiliary Initial Conditions: Over- or under-determined linear systems may require finite-difference approximations for initialization, especially for explicit methods such as AB; BDF methods are less dependent on such conditions.
- Grid Step Size and Network Capacity: To ensure the error is dominated by discretization rather than network approximation, one should choose $h$ so that the discretization term $h^{p}$ exceeds the learning floor $\epsilon_{\mathcal{F}}$, with networks made sufficiently wide or deep as needed.
- Analyticity and High-Frequency Modes: Theoretical guarantees rely on sufficient regularity (analyticity) of both the true and learned vector fields. Practical implementations benefit from the implicit spectral bias of SGD, but explicit regularization may be required for stiff or highly oscillatory systems.
- Limitations: High-dimensional systems and stiff or multiscale dynamics increase the required data and network size. Large knot counts $G$ or high spline degrees $k$ in KANs/Kolmogorov–Arnold approaches may be computationally expensive. Theoretical results assume noiseless, densely sampled trajectories, and extension to noisy or incomplete data remains an open area.
LMM-DMD provides a sharp analytic and algorithmic foundation for data-driven ODE discovery, integrating high-order numerical schemes and expressive neural approximators with rigorous a priori error control (Hu et al., 25 Jan 2025, Du et al., 2021, Keller et al., 2019, Zhu et al., 2022).
7. Research Directions and Applications
The LMM-DMD framework is broadly applicable across domains where the governing dynamical law is unknown, but state trajectories can be sampled:
- System Identification: Physics, biology (e.g., cellular signaling networks, gene regulatory dynamics), engineering (control systems), and neuroscience (population dynamics).
- Model Discovery with Noisy Data: While the canonical theory addresses noiseless data, extensions incorporating regularization or robust statistics are becoming increasingly relevant.
- PDE Discovery and Multi-Scale Systems: Extensions that incorporate spatial discretization, run multi-stage or multirate schemes, or recover partial differential equations are active research areas.
- Comparison and Integration with Other Methods: LMM-DMD complements approaches such as Runge–Kutta discovery or sparse regression (e.g., SINDy), providing high-order accuracy and error bounds conditioned on neural-network representational power.
LMM-DMD thus constitutes a rigorous, versatile, and high-accuracy paradigm for the inverse modeling of continuous-time dynamics from time-series data, with proven convergence theory and broad relevance across scientific domains.