Approximate Bayesian Filtering via BSDEs
- The paper introduces an approximate Bayesian filtering paradigm that recasts the Fokker–Planck PDE through a nonlinear Feynman–Kac representation based on backward stochastic differential equations.
- It leverages deep BSDE solvers where neural networks approximate the backward process and its gradient, enabling scalable and efficient filtering in high-dimensional nonlinear settings.
- Empirical results on Ornstein–Uhlenbeck and bistable processes validate the mixed error bounds, demonstrating convergence rates consistent with Euler–Maruyama discretization.
An approximate Bayesian filter based on backward stochastic differential equations (BSDEs) is a class of nonlinear filtering algorithms in which the evolution of the conditional (typically unnormalized) filtering density is represented via a nonlinear Feynman–Kac formula, leading to a forward–backward SDE system. By leveraging deep learning for the numerical approximation of the BSDE solutions, this framework achieves efficient and scalable nonlinear filtering, particularly in high-dimensional and nonlinear settings (Bågmark et al., 14 Aug 2025).
1. Nonlinear Feynman–Kac Representation of the Filtering Density
In the classical continuous–discrete filtering context, the state density between observations satisfies the Fokker–Planck (Kolmogorov forward) equation. The core methodological advance is to re-express this PDE as a backward stochastic differential equation, yielding a nonlinear probabilistic representation for the filtering density. For the time interval $[t_k, t_{k+1}]$, the prediction step is reformulated (schematically) as

$$Y_s = g(X_{t_{k+1}}) + \int_s^{t_{k+1}} f(X_r, Y_r, Z_r)\,dr - \int_s^{t_{k+1}} Z_r\,dW_r,$$

where $X = X^{t_k,x}$ is the forward process started at $x$ at time $t_k$, the terminal condition $g$ encodes the updated density at the last observation, and the driver $f$ entails terms derived from the drift, diffusion, and adjoint operator structure of the Fokker–Planck PDE. Crucially, the evolution of the density is recast in a coupled FBSDE system:
- Forward SDE: $dX_s = \mu(X_s)\,ds + \sigma(X_s)\,dW_s$, $X_{t_k} = x$,
- Backward SDE: $dY_s = -f(X_s, Y_s, Z_s)\,ds + Z_s\,dW_s$, $Y_{t_{k+1}} = g(X_{t_{k+1}})$.
This probabilistic reformulation (nonlinear Feynman–Kac) is the foundation for subsequent numerical and machine learning approximations of the filter.
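For reference, the adjoint (Fokker–Planck) operator whose structure the driver absorbs has the standard divergence form; expanding the derivatives of the coefficients produces the first- and zeroth-order terms that enter $f$:

$$(\mathcal{L}^{*} p)(x) \;=\; \tfrac{1}{2}\sum_{i,j=1}^{d}\partial_{x_i}\partial_{x_j}\big(a_{ij}(x)\,p(x)\big) \;-\; \sum_{i=1}^{d}\partial_{x_i}\big(\mu_i(x)\,p(x)\big), \qquad a(x) = \sigma(x)\sigma(x)^{\top}.$$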
2. Deep BSDE Solver: Network-Based Approximation
The deep BSDE method brings neural networks to bear on the BSDE formulation of the filtering-density prediction step. The backward component $Y$ and its gradient process $Z$ (corresponding to $\sigma^{\top}\nabla_x u$ for the underlying PDE solution $u$) are approximated by neural networks $Y^{\theta}$ and $Z^{\theta}$, with $\theta$ denoting the network weights.
The numerical implementation discretizes the SDE using Euler–Maruyama for the forward path and projects the backward recursion onto neural network bases. The optimization minimizes the expected squared difference between the simulated terminal output of the backward process and the known terminal value $g(X_{t_{k+1}})$ (determined by the density available at the measurement time):

$$\min_{\theta}\; \mathbb{E}\Big[\big|g(X_{t_{k+1}}) - Y^{\theta}_{t_{k+1}}\big|^2\Big],$$

subject to the discrete-time dynamics (with step size $h$ and Brownian increments $\Delta W_n$)

$$X_{t_{n+1}} = X_{t_n} + \mu(X_{t_n})\,h + \sigma(X_{t_n})\,\Delta W_n, \qquad Y^{\theta}_{t_{n+1}} = Y^{\theta}_{t_n} - f\big(X_{t_n}, Y^{\theta}_{t_n}, Z^{\theta}_{t_n}\big)\,h + Z^{\theta}_{t_n}\,\Delta W_n.$$
The hierarchical structure allows the networks to absorb both the state and the observation sequence up to the current observation time, and to effectively learn an adaptable density propagator.
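A minimal sketch of this solver for a single prediction interval is given below. It assumes a one-dimensional state, placeholder drift, diffusion, driver, and terminal density, and illustrative network and optimizer choices; it shows only the generic deep BSDE pattern, not the authors' implementation.

```python
# Minimal deep BSDE sketch for one prediction interval [t_k, t_{k+1}].
# Assumptions (not from the paper): 1-D state, placeholder mu/sigma/f/g,
# and illustrative network sizes and optimizer settings.
import torch
import torch.nn as nn

d, N, h = 1, 50, 0.02              # state dimension, time steps, step size
mu    = lambda x: -x               # placeholder drift (OU-like)
sigma = lambda x: torch.ones_like(x)
f     = lambda x, y, z: torch.zeros_like(y)   # placeholder BSDE driver
g     = lambda x: torch.exp(-0.5 * x.pow(2).sum(-1, keepdim=True))  # placeholder terminal density

class Net(nn.Module):
    """Small feed-forward network mapping the state to Y_{t_k}(x) or Z_{t_n}(x)."""
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, out_dim))
    def forward(self, x):
        return self.net(x)

y0_net = Net(1)                                      # backward value at the interval start
z_nets = nn.ModuleList([Net(d) for _ in range(N)])   # Z at each time step
opt = torch.optim.Adam(list(y0_net.parameters()) + list(z_nets.parameters()), lr=1e-3)

for it in range(2000):
    x = torch.randn(512, d)          # sample starting points at the interval start
    y = y0_net(x)
    for n in range(N):               # Euler–Maruyama rollout of the coupled FBSDE
        z  = z_nets[n](x)
        dw = torch.randn_like(x) * h**0.5
        y  = y - f(x, y, z) * h + (z * dw).sum(-1, keepdim=True)
        x  = x + mu(x) * h + sigma(x) * dw
    loss = ((g(x) - y) ** 2).mean()  # terminal mismatch = training objective
    opt.zero_grad(); loss.backward(); opt.step()
```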
3. Offline Training and Online Sequential Application
The methodology is divided between an offline training phase and online application.
Offline phase: Simulate many forward trajectories and corresponding observation histories. For each interval $[t_k, t_{k+1}]$, train the networks to minimize the terminal error for a range of starting points and observation sequences. All intensive computation occurs offline; the network parameters representing the mappings $x \mapsto Y^{\theta}(x)$ and $x \mapsto Z^{\theta}(x)$ are fixed after training.
Online phase: When presented with real data, use new observations to update the current density by multiplying the predicted density by the observation likelihood, then proceed to the next time interval using the pre-trained deep BSDE network. The update at observation time $t_{k+1}$ is

$$p(t_{k+1}, x) \;\propto\; \ell\big(y_{k+1} \mid x\big)\, \hat p(t_{k+1}, x),$$

where $\ell$ is the observation likelihood and $\hat p(t_{k+1}, \cdot)$ is the predicted (unnormalized) density given by the neural network output.
This structure enables rapid, real-time filtering with the computational effort front-loaded in the offline stage.
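A minimal sketch of the online recursion follows, assuming a grid-based representation of a one-dimensional density, a Gaussian observation likelihood with an illustrative noise level, and a stand-in `predict_density` for the trained network output (all names hypothetical):

```python
# Online filtering recursion on a fixed state grid (illustrative, 1-D).
import numpy as np

def predict_density(grid, current_density):
    """Stand-in for the trained deep BSDE prediction step; a real predictor is a
    fixed pre-trained network conditioned on the observation history."""
    return np.exp(-0.5 * grid**2)     # placeholder prediction

def gaussian_likelihood(y_obs, grid, r=0.1):
    return np.exp(-0.5 * (y_obs - grid)**2 / r) / np.sqrt(2 * np.pi * r)

grid = np.linspace(-4.0, 4.0, 401)
dx = grid[1] - grid[0]
density = np.exp(-0.5 * grid**2)       # initial (updated) density at t_0
density /= density.sum() * dx

for y_obs in [0.3, -0.1, 0.5]:         # stream of measurements
    predicted = predict_density(grid, density)               # prediction step
    density = gaussian_likelihood(y_obs, grid) * predicted   # Bayes update
    density /= density.sum() * dx                            # renormalize for estimates/plots
```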
4. Error Analysis: Mixed A Priori–A Posteriori Bounds
A distinguishing feature of this BSDE-based filtering paradigm is the derived mixed error bound quantifying both the time-discretization and the network-approximation error. For each time step $t_k$, the error in the approximated density satisfies, schematically,

$$\mathbb{E}\big[\,\big|Y_{t_k} - Y^{\theta}_{t_k}\big|^2\,\big] \;\le\; C\Big(h + \mathbb{E}\big[\,\big|g(X_{t_{k+1}}) - Y^{\theta}_{t_{k+1}}\big|^2\,\big]\Big),$$

where $h$ is the time-discretization step and the expectation on the right quantifies the residual network error over simulated terminal states. The bound is "mixed" because it is both a priori (from the discretization) and a posteriori (from the empirically realized neural-network fitting accuracy). Under standard smoothness and ellipticity conditions, the theoretical convergence rate in the time discretization is of order $1/2$ in $h$, matching known BSDE and Euler–Maruyama rates.
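The a posteriori term can be estimated by plain Monte Carlo from freshly simulated forward paths and the trained networks; a minimal sketch, reusing the placeholder objects (`mu`, `sigma`, `f`, `g`, `y0_net`, `z_nets`, `d`, `N`, `h`) from the Section 2 sketch:

```python
# Monte Carlo estimate of the a posteriori residual E|g(X_T) - Y_T|^2,
# continuing the illustrative sketch from Section 2 (same placeholder objects).
import torch

with torch.no_grad():
    x = torch.randn(10_000, d)                  # fresh starting points
    y = y0_net(x)
    for n in range(N):                          # same Euler–Maruyama rollout
        z  = z_nets[n](x)
        dw = torch.randn_like(x) * h**0.5
        y  = y - f(x, y, z) * h + (z * dw).sum(-1, keepdim=True)
        x  = x + mu(x) * h + sigma(x) * dw
    residual = ((g(x) - y) ** 2).mean().item()  # plugs into the mixed bound above
print(f"a posteriori residual estimate: {residual:.3e}")
```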
5. Numerical Illustration: Empirical Validation
Two example systems underscore the practical convergence and accuracy:
Ornstein–Uhlenbeck process: For a linear (Ornstein–Uhlenbeck) state equation with Gaussian observations, the benchmark is the Kalman filter. The pointwise and residual errors of the deep BSDE filter, evaluated both over time and over the state space, decrease at the predicted rate as the discretization is refined (increasing the number of time intervals), matching the theoretical predictions.
Bistable process: For a double-well drift, the system is fundamentally nonlinear, the density is bimodal, and no closed-form filter exists. The deep BSDE filter is compared to a high-resolution particle-KDE reference. The empirical convergence rate approaches the theoretical one until reaching the limit set by neural-network training or sampling error.
These experiments confirm the mixed error bound and demonstrate that, with adequate offline training, the approach robustly propagates complex, highly non-Gaussian densities through the filtering recursion.
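As a usage note, the empirical convergence order reported in such experiments can be extracted from a log-log fit of error against step size; a minimal sketch with placeholder error values (not results from the paper):

```python
# Estimate an empirical convergence order from errors at several step sizes
# via a least-squares fit of log(error) against log(h); numbers are placeholders.
import numpy as np

h_values = np.array([0.1, 0.05, 0.025, 0.0125])
errors   = np.array([3.1e-2, 2.3e-2, 1.6e-2, 1.1e-2])  # e.g. L2 error vs. a Kalman/particle reference

slope, _ = np.polyfit(np.log(h_values), np.log(errors), 1)
print(f"empirical convergence order ≈ {slope:.2f}")     # order 1/2 would match classical EM/BSDE rates
```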
6. Context and Implications
This filtering framework situates itself in a rapidly developing direction at the intersection of stochastic analysis, numerical methods for SDEs, and machine learning. Use of the nonlinear Feynman–Kac representation and deep BSDE solvers permits expressive, nonparametric approximations to filtering densities, scalable to high dimension and accommodating strong nonlinearities. The offline–online split provides efficiency when real-time filtering is required.
The method is closely related to advances in deep BSDE solvers for PDEs and FBSDEs, extending them to the recursive Bayesian filtering context by leveraging probabilistic representations of Kolmogorov and Fokker–Planck equations, and integrating neural network-based regression for unnormalized density approximation. The mixed a priori–a posteriori error analysis mirrors contemporary deep learning theory, combining classical numerical convergence rates with terms that depend on the realized function-approximation accuracy.
Empirical performance, as illustrated in both linear and nonlinear canonical filtering problems (Bågmark et al., 14 Aug 2025), supports the theoretical rates. The approach generalizes the propagation step in nonlinear filters, unifying PDE- and SDE-based perspectives and offering a competitive alternative to particle filters and kernel-based density estimation, especially where computational constraints favor offline–online architectures.