Deep Splitting Filter: Neural SPDE Solver
- Deep splitting filters are advanced numerical algorithms that combine operator splitting with neural network parameterization to solve SPDE-based nonlinear filtering problems.
- They decompose the filtering evolution into manageable prediction–update steps, enabling fast, online inference even in high-dimensional systems.
- The method mitigates the curse of dimensionality by approximating conditional densities via Monte Carlo sampling with controlled error rates.
A Deep Splitting Filter refers to a class of numerical and data-driven algorithms for nonlinear state estimation (Bayesian filtering) in continuous-time dynamical systems, which leverage operator splitting and deep neural network parameterizations to approximate the evolution of conditional probability densities governed by stochastic partial differential equations (SPDEs), primarily the Fokker–Planck or Zakai equation. Deep splitting filters—sometimes described as “mesh-free” neural SPDE solvers—are designed to mitigate the curse of dimensionality in classical particle and grid-based filtering schemes, enable fast online inference after offline training, and provide theoretically controlled error rates under broad regularity and Hörmander-type conditions.
1. Mathematical Foundations: SPDE-Based Filtering
At the core of nonlinear filtering is the evolution of the signal process governed by an SDE
$$dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t,$$
and discrete or continuous noisy observations satisfying
$$Y_k = h(X_{t_k}) + \xi_k \quad\text{(discrete time)}, \qquad dY_t = h(X_t)\,dt + dV_t \quad\text{(continuous time)}.$$
The conditional density $p_t$ evolves according to the forward Fokker–Planck (Kolmogorov) equation between measurements,
$$\partial_t p_t = \mathcal{A}^{*} p_t,$$
where
$$\mathcal{A}^{*} p = -\sum_i \partial_{x_i}\big(b_i\, p\big) + \tfrac{1}{2}\sum_{i,j}\partial_{x_i}\partial_{x_j}\big((\sigma\sigma^{\top})_{ij}\, p\big),$$
and is updated at observation times $t_k$ by Bayes' rule:
$$p_{t_k}(x \mid y_{1:k}) \;\propto\; \ell(y_k \mid x)\, p_{t_k^-}(x \mid y_{1:k-1}).$$
For continuous-observation models, the Zakai SPDE for the unnormalized conditional density $q_t$ arises:
$$dq_t = \mathcal{A}^{*} q_t\, dt + q_t\, h^{\top}\, dY_t.$$
This setting includes classical benchmarks such as the Benes filter and covers general nonlinear filtering problems (Bågmark et al., 22 Sep 2024, Lobbe, 2022, Bågmark et al., 2022).
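To make the setting concrete, the following minimal Python sketch simulates a signal path and discrete noisy observations with Euler–Maruyama. The bistable drift, constant diffusion, and identity observation map are illustrative assumptions, not the specific benchmark models of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

def drift(x):
    # Hypothetical bistable drift b(x) = x - x^3
    return x - x**3

def diffusion(x):
    # Hypothetical constant diffusion sigma(x) = 0.5
    return 0.5

def observe(x, noise_std=0.1):
    # Hypothetical observation h(x) = x plus Gaussian noise
    return x + noise_std * rng.normal()

def simulate(x0=0.0, T=5.0, dt=1e-3, obs_every=100):
    """Euler-Maruyama simulation of dX = b(X) dt + sigma(X) dW,
    with noisy observations collected every `obs_every` steps."""
    n_steps = int(T / dt)
    x = x0
    path, obs_times, obs = [x0], [], []
    for k in range(1, n_steps + 1):
        dw = rng.normal(scale=np.sqrt(dt))
        x = x + drift(x) * dt + diffusion(x) * dw
        path.append(x)
        if k % obs_every == 0:
            obs_times.append(k * dt)
            obs.append(observe(x))
    return np.array(path), np.array(obs_times), np.array(obs)

if __name__ == "__main__":
    path, t_obs, y = simulate()
    print(f"{len(y)} observations, first few: {y[:3]}")
```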
2. Deep Splitting Scheme: Operator Splitting and Neural Parameterization
The deep splitting methodology decomposes the SPDE generator into tractable sub-operators. One canonical approach is to split $\mathcal{A}^{*} = \mathcal{A}_1 + \mathcal{A}_2$, with $\mathcal{A}_1$ the drift–diffusion part and $\mathcal{A}_2$ the first-order remainder encompassing variable coefficients and nonlinearity. The discrete-time prediction step leverages the Feynman–Kac formula over a timestep $\Delta t$:
$$p_{n+1}(x) \;\approx\; \mathbb{E}\Big[\,p_n\big(\bar X^{x}_{\Delta t}\big) + \Delta t\,\big(\mathcal{A}_2\, p_n\big)\big(\bar X^{x}_{\Delta t}\big)\Big],$$
where $\bar X^{x}$ denotes the solution of the SDE associated with $\mathcal{A}_1$ started at $x$. This expectation is approximated by Monte Carlo samples and parameterized via a deep neural network $\varphi_\theta$:
$$\theta_{n+1} \in \arg\min_{\theta}\; \mathbb{E}\Big[\big|\varphi_\theta(\xi) - p_n\big(\bar X^{\xi}_{\Delta t}\big) - \Delta t\,\big(\mathcal{A}_2\, p_n\big)\big(\bar X^{\xi}_{\Delta t}\big)\big|^2\Big],$$
with $\xi$ drawn from a sampling distribution covering the relevant state region and $p_{n+1} := \varphi_{\theta_{n+1}}$. Energy-based outputs enforce positivity and facilitate normalization. This paradigm generalizes naturally to Zakai-type SPDEs and permits recursive, sample-based training without spatial grids.
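The prediction step can be viewed as a regression problem, as in the following simplified PyTorch sketch: a positive (energy-based) network is fitted by least squares to the Monte Carlo target $\mathbb{E}[p_n(\bar X^{x}_{\Delta t})]$, with the lower-order remainder term omitted for brevity. The names (`DensityNet`, `prediction_step`), the uniform sampling box, and all hyperparameters are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class DensityNet(nn.Module):
    """Small MLP with an exponential (energy-based) output so that the
    density approximation is positive by construction."""
    def __init__(self, dim=1, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return torch.exp(-self.net(x))

def prediction_step(p_n, drift, sigma, dt, sample_box=(-4.0, 4.0),
                    n_samples=4096, n_epochs=200, lr=1e-3, dim=1):
    """One simplified deep-splitting prediction step: fit a new network to
    the Monte Carlo / Feynman-Kac target E[p_n(X_dt^x)] over sampled x.
    The A_2 remainder term is dropped here for brevity."""
    phi = DensityNet(dim)
    opt = torch.optim.Adam(phi.parameters(), lr=lr)
    lo, hi = sample_box
    for _ in range(n_epochs):
        x = lo + (hi - lo) * torch.rand(n_samples, dim)   # sampling points xi
        dw = torch.randn(n_samples, dim) * dt ** 0.5
        x_next = x + drift(x) * dt + sigma * dw           # one Euler-Maruyama substep
        with torch.no_grad():
            target = p_n(x_next)                          # Monte Carlo regression target
        loss = ((phi(x) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return phi

if __name__ == "__main__":
    # Hypothetical unnormalized Gaussian prior and bistable drift, for illustration only.
    prior = lambda x: torch.exp(-0.5 * x.pow(2).sum(dim=1, keepdim=True))
    bistable_drift = lambda x: x - x**3
    p_1 = prediction_step(prior, bistable_drift, sigma=0.5, dt=0.01)
```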
3. Prediction–Update Structure and Online Recursion
Deep splitting filters implement a two-stage prediction–correction architecture. The offline-trained neural networks advance the density in time via the split propagator (prediction step), simulating stochastic paths, and the exact Bayes multiplication realizes the update at data arrival. The normalization can be performed recursively:
$$p_{n+1}(x \mid y_{1:n+1}) = \frac{\ell(y_{n+1} \mid x)\, p_{n+1 \mid n}(x)}{C_{n+1}}, \qquad C_{n+1} = \int \ell(y_{n+1} \mid x)\, p_{n+1 \mid n}(x)\, dx,$$
with the normalization constant $C_{n+1}$ evaluated by Monte Carlo or quadrature. Notably, once trained, the filter computes instantaneous conditional densities and moments for arbitrary fresh observation paths, with no retraining required for new data (Bågmark et al., 22 Sep 2024, Lobbe, 2022, Bågmark et al., 2022).
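A minimal sketch of the update stage, under the assumptions of a Gaussian observation likelihood and a uniform sampling box for the Monte Carlo normalization (names such as `bayes_update` are hypothetical):

```python
import torch

def gaussian_likelihood(y, x, obs_std=0.1):
    """Hypothetical observation likelihood l(y | x) for h(x) = x."""
    return torch.exp(-0.5 * ((y - x) / obs_std) ** 2) / (obs_std * (2 * torch.pi) ** 0.5)

def bayes_update(p_pred, y, sample_box=(-4.0, 4.0), n_mc=100_000, dim=1):
    """Exact Bayes multiplication followed by Monte Carlo normalization:
    p_post(x) = l(y|x) p_pred(x) / C, with C estimated by uniform sampling
    over `sample_box` (a simple proposal; a sketch only)."""
    lo, hi = sample_box
    x = lo + (hi - lo) * torch.rand(n_mc, dim)
    vol = (hi - lo) ** dim
    with torch.no_grad():
        c_hat = (gaussian_likelihood(y, x) * p_pred(x)).mean() * vol
    def p_post(x_query):
        return gaussian_likelihood(y, x_query) * p_pred(x_query) / c_hat
    return p_post, c_hat

if __name__ == "__main__":
    # Illustrative unnormalized Gaussian predicted density.
    prior = lambda x: torch.exp(-0.5 * x.pow(2).sum(dim=1, keepdim=True))
    p_post, c = bayes_update(prior, y=0.3)
    print(float(c))
```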
4. Model Architectures, Training Procedures, and Domain Adaptation
Typical implementations use fully connected ReLU or tanh neural networks of moderate depth (e.g., 3–4 hidden layers, 64–128 units per layer) for density parameterization. Loss functions minimize SPDE residuals or mean-square errors between network outputs and splitting targets, using large batches of MC-sampled paths. In nonlinear/multimodal regimes, domain adaptation is crucial: at each prediction-update cycle, the spatial support of the network is recentered/rescaled according to means and variances of the posterior, ensuring mass coverage and mitigating drift—this is especially relevant for Benes-like models with highly nonstationary filtering densities. Monte Carlo sampling, automatic differentiation, and optimizers such as Adam are routinely employed (Lobbe, 2022).
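One way to realize the recentering/rescaling described above is to estimate the posterior mean and standard deviation by self-normalized Monte Carlo over the current box and redefine the sampling box for the next prediction step. The sketch below assumes a one-dimensional state and hypothetical helper names; it is not the reference implementation.

```python
import torch

def adapt_domain(p_post, old_box=(-4.0, 4.0), n_mc=100_000, half_width=4.0):
    """Recenter/rescale the training domain using self-normalized Monte Carlo
    estimates of the posterior mean and standard deviation (1D sketch)."""
    lo, hi = old_box
    x = lo + (hi - lo) * torch.rand(n_mc, 1)
    with torch.no_grad():
        w = p_post(x)
        w = w / w.sum()                        # self-normalized weights
        mean = (w * x).sum()
        var = (w * (x - mean) ** 2).sum()
    std = var.sqrt()
    return (float(mean - half_width * std), float(mean + half_width * std))

# Illustrative usage with a hypothetical unnormalized Gaussian posterior:
gaussian = lambda x: torch.exp(-0.5 * ((x - 1.0) / 0.5) ** 2)
print(adapt_domain(gaussian))
```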
5. Convergence Theory and Error Analysis
Under strong regularity assumptions and a parabolic Hörmander condition on the drift and diffusion vector fields, deep splitting filters satisfy strong global convergence for the density approximation, with local errors controlled by the time step; the proofs use stochastic integration by parts and Malliavin calculus to control pathwise error propagation across samples. The central limit theorem and unbiased Monte Carlo estimators yield a sampling error scaling as $O(N^{-1/2})$ in the number of samples $N$ per substep. Empirical results confirm that the error decays under temporal refinement, consistent with the theory, in Ornstein–Uhlenbeck and bistable drift examples (Bågmark et al., 22 Sep 2024).
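A common empirical check of such rates is to fit the slope of log error against log time step across successive refinements. The snippet below is a generic diagnostic with placeholder numbers, not results from the cited papers.

```python
import numpy as np

def observed_order(dts, errors):
    """Least-squares slope of log(error) vs log(dt): an empirical estimate
    of the temporal convergence order from errors measured at a sequence
    of time-step sizes (hypothetical inputs)."""
    slope, _ = np.polyfit(np.log(dts), np.log(errors), 1)
    return slope

# Placeholder values, shown only to illustrate the call:
print(observed_order([0.1, 0.05, 0.025], [0.020, 0.011, 0.006]))
```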
6. Computational Performance and Numerical Results
Benchmarks demonstrate the efficacy of deep splitting filters in low and moderate dimensions. For 1D Ornstein–Uhlenbeck and nonlinear bistable SDEs over typical time horizons with regularly spaced observation updates, networks trained offline (~2 hrs on an RTX 3080 GPU per example) yield online inference on the order of milliseconds per trajectory. Evaluated error metrics include posterior mean error, density error relative to Monte Carlo references, probability mass retention, and normalization acceptance rates. Adaptivity improves posterior tracking and mitigates boundary loss; increasing network width/depth reduces error but raises sample requirements and runtime (Bågmark et al., 22 Sep 2024, Lobbe, 2022, Bågmark et al., 2022).
Performance table for a prototypical setting:
| Model | MAE vs. true posterior | Density error | Training time |
|---|---|---|---|
| OU (1D) | <0.02 | – | ~2 hrs |
| Bistable (1D) | <0.05 | – | ~2 hrs |
| Linear (20D) | Comparable to a PF with 1000 particles | Amortized per-step inference cost | 16 hrs |
A plausible implication is that deep splitting filters can offer accuracy competitive with bootstrap particle filters while being computationally efficient, especially for high-dimensional problems where PF scales poorly.
7. Extensions, Generalizations, and Limitations
The deep splitting approach admits various generalizations: it extends to any filtering problem where the Zakai (or Fokker–Planck) equation applies and can accelerate mesh-free filtering in high-dimensional nonlinear systems such as atmospheric models. Further developments in energy-based parameterizations, higher-order splitting, adaptive time grids, and alternative training objectives (e.g., reverse KLD, noise-contrastive estimation) are feasible. Limitations include potential error accumulation in highly multimodal or strongly drifting posterior regimes, the need for tailor-made tail layers to ensure integrability, and the absence of rigorous convergence proofs for variants beyond the core splitting steps. Training complexity increases for longer time horizons and larger state spaces, necessitating advances in scalable neural architectures and splitting schemes (Lobbe, 2022, Bågmark et al., 2022).
In summary, deep splitting filters constitute a rigorous, operator-theoretic framework for data-driven Bayesian filtering, combining SPDE splitting, neural approximation, recursive normalization, and theoretically controlled error—enabling scalable online inference in nonlinear, high-dimensional dynamical systems.