Gaussian Bayesian Network EKF
- GBN-EKF is a non-linear state estimation approach that integrates Gaussian Bayesian Networks with the Extended Kalman Filter to address stiffness and ill-conditioned measurements.
- Its methodology replaces global matrix inversions with local scalar operations, significantly improving numerical stability for stiff dynamical models.
- Comparative analysis shows that GBN-EKF outperforms classical EKF, UKF, and CKF by reducing RMSE and effectively managing near-singular measurement covariances.
The Gaussian Bayesian Network-based Extended Kalman Filter (GBN-EKF) is a non-linear state estimation methodology for continuous–discrete stochastic systems characterized by stiffness and ill-conditioned measurements. GBN-EKF advances the Extended Kalman Filter (EKF) through Gaussian Bayesian Network (GBN) formalism, enabling robust recursive state estimation without matrix inversion during measurement update steps. This yields enhanced stability for stiff dynamical models and singular or near-singular measurement covariances. The approach is presented and analyzed in the context of systems where traditional Cubature (CKF) and Unscented (UKF) Kalman Filters are numerically destabilized, particularly under ill-conditioned measurement scenarios (Behera et al., 4 Nov 2025).
1. Problem Statement and System Formulation
The systems of interest satisfy the continuous–discrete model

$$dx(t) = f(x(t))\,dt + G\,d\beta(t), \qquad z_k = h(x(t_k)) + v_k,$$

where:
- $x(t) \in \mathbb{R}^n$ is the state,
- $\beta(t)$ is Brownian motion with diffusion (covariance) matrix $Q$,
- $z_k \in \mathbb{R}^m$ are measurements at discrete times $t_k$,
- $v_k \sim \mathcal{N}(0, R_k)$ is Gaussian measurement noise.
Stiffness corresponds to the Jacobian $F = \partial f/\partial x$ having eigenvalues of large magnitude (widely separated timescales), yielding fast transient modes. Ill-conditioned measurements manifest when $R_k$ approaches singularity, compromising the numerical stability of classical filtering updates. The goal is to estimate state trajectories using EKF principles restructured as a GBN, explicitly eliminating matrix-inversion steps.
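To make the ill-conditioning concrete, here is a small sketch (the numbers are illustrative, not from the paper): two nearly collinear sensor rows with tiny noise produce an innovation covariance $S = H P H^\top + R$ whose condition number is enormous, so inverting it in a conventional update amplifies rounding error.

```python
import numpy as np

# Illustrative only: two nearly collinear sensor rows and near-zero
# measurement noise make S = H P H^T + R close to singular.
P = np.array([[2.0]])                   # predicted state covariance (1x1)
H = np.array([[1.0], [1.0 + 1e-9]])     # two almost-identical sensors
R = 1e-12 * np.eye(2)                   # near-zero measurement noise

S = H @ P @ H.T + R                     # innovation covariance
print("cond(S) = %.2e" % np.linalg.cond(S))   # huge: inverting S is fragile
```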
2. Probabilistic Structure and Gaussian Bayesian Network Representation
The prediction and correction steps are framed through joint Gaussian distributions:

$$\begin{pmatrix} x_k \\ z_k \end{pmatrix} \sim \mathcal{N}\!\left(\begin{pmatrix} \hat{x}_{k|k-1} \\ h_{k|k-1}\end{pmatrix},\; \Sigma_{k|k-1}\right), \qquad \Sigma_{k|k-1} = \begin{pmatrix} P_{k|k-1} & P_{k|k-1} H_k^\top \\ H_k P_{k|k-1} & H_k P_{k|k-1} H_k^\top + R_k \end{pmatrix}.$$

The joint covariance is decomposed via a recursive regression structure over an ordering of the variables,

$$x_i = \mu_i + \sum_{j<i} b_{ij}\,(x_j - \mu_j) + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, v_i),$$

so that $\Sigma = (I - B)^{-1}\,\mathrm{diag}(v)\,(I - B)^{-\top}$, where the regression coefficients $b_{ij}$ and residual variances $v_i$ are obtained through forward recursions on the blocks of $\Sigma_{k|k-1}$.
Conditioning on an observed measurement $z_k$ proceeds via arc reversals and local updates, never requiring the inversion of $\Sigma_{k|k-1}$ or any of its constituent blocks.
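A minimal sketch of the regression decomposition (our own notation; the paper's ordering conventions and recursions may differ): each variable is regressed on its predecessors, and the covariance is recovered exactly as $(I - B)^{-1}\,\mathrm{diag}(V)\,(I - B)^{-\top}$. The conversion below uses a dense solve for clarity rather than the paper's forward recursions.

```python
import numpy as np

def cov_to_regression(Sigma):
    """Decompose a covariance into GBN regression form:
    x_i = sum_{j<i} B[i, j] * x_j + eps_i, with Var(eps_i) = V[i],
    by regressing each variable on its predecessors."""
    n = Sigma.shape[0]
    B = np.zeros((n, n))
    V = np.zeros(n)
    V[0] = Sigma[0, 0]
    for i in range(1, n):
        b = np.linalg.solve(Sigma[:i, :i], Sigma[:i, i])  # regression coeffs
        B[i, :i] = b
        V[i] = Sigma[i, i] - Sigma[:i, i] @ b             # residual variance
    return B, V

def regression_to_cov(B, V):
    """Invert the decomposition: Sigma = (I - B)^{-1} diag(V) (I - B)^{-T}."""
    M = np.linalg.inv(np.eye(len(V)) - B)
    return M @ np.diag(V) @ M.T

Sigma = np.array([[4.0, 2.0, 0.6],
                  [2.0, 3.0, 0.5],
                  [0.6, 0.5, 1.0]])
B, V = cov_to_regression(Sigma)
assert np.allclose(regression_to_cov(B, V), Sigma)  # round-trip is exact
```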
3. Filter Recursion and Update Dynamics
3.1 Time Update (Prediction)
State mean and covariance are propagated by the standard matrix differential equations (MDEs):

$$\frac{d\hat{x}}{dt} = f(\hat{x}), \qquad \frac{dP}{dt} = F(\hat{x})\,P + P\,F(\hat{x})^\top + G Q G^\top, \qquad F = \frac{\partial f}{\partial x},$$

with initial conditions $\hat{x}(t_{k-1}) = \hat{x}_{k-1|k-1}$ and $P(t_{k-1}) = P_{k-1|k-1}$. Integration over $[t_{k-1}, t_k]$ yields the predicted state $\hat{x}_{k|k-1}$ and covariance $P_{k|k-1}$.
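As a sanity check on the prediction step, the scalar case of these MDEs can be integrated with classical RK4 and compared with the closed-form solution; the values of $\lambda$, $q$, and the step size below are illustrative.

```python
import math

# Scalar moment ODEs: d xhat/dt = lam*xhat, dP/dt = 2*lam*P + q,
# integrated over [0, 1] with classical RK4.
lam, q = -3.0, 0.5
x, P, dt = 1.0, 2.0, 1e-3

def deriv(x, P):
    return lam * x, 2.0 * lam * P + q

for _ in range(1000):
    k1x, k1P = deriv(x, P)
    k2x, k2P = deriv(x + 0.5*dt*k1x, P + 0.5*dt*k1P)
    k3x, k3P = deriv(x + 0.5*dt*k2x, P + 0.5*dt*k2P)
    k4x, k4P = deriv(x + dt*k3x, P + dt*k3P)
    x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6.0
    P += dt * (k1P + 2*k2P + 2*k3P + k4P) / 6.0

# Closed-form solutions of the linear moment ODEs
x_true = math.exp(lam)
P_true = (2.0 + q / (2*lam)) * math.exp(2*lam) - q / (2*lam)
assert abs(x - x_true) < 1e-6 and abs(P - P_true) < 1e-6
```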
3.2 Linearization and Measurement Model
At update step $k$, the measurement function is linearized about the predicted mean:

$$H_k = \left.\frac{\partial h}{\partial x}\right|_{\hat{x}_{k|k-1}}, \qquad h_{k|k-1} = h(\hat{x}_{k|k-1}).$$
3.3 EKF and GBN-EKF Update Contrast
| Step | Conventional EKF | GBN-EKF Approach |
|---|---|---|
| Update Formula | Involves $S_k^{-1} = (H_k P_{k|k-1} H_k^\top + R_k)^{-1}$ (matrix inversion) | Uses arc reversal on the $(B, V)$ parameters |
| Numerical Stability | Sensitive to ill-conditioning | Robust, no inversion |
| Computation Type | Matrix operations | Scalar multiplies, adds/divides |
In EKF:

- Gain: $K_k = P_{k|k-1} H_k^\top S_k^{-1}$, with innovation covariance $S_k = H_k P_{k|k-1} H_k^\top + R_k$
- Update: $\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\,(z_k - h_{k|k-1})$, $\quad P_{k|k} = (I - K_k H_k)\,P_{k|k-1}$

In GBN-EKF, the update proceeds in $(B, V)$ space:

Augmentation: the measurement nodes are appended to the network, with regression coefficients taken from the rows of $H_k$ and residual variances from the diagonal of $R_k$.

Conditioning: sequential arc reversals for each measurement dimension. For a parent $i$ and child $j$, reversing the arc $i \to j$ uses the scalar relations

$$v_j' = v_j + b_{ji}^2\, v_i, \qquad b_{ij}' = \frac{b_{ji}\, v_i}{v_j'}, \qquad v_i' = v_i - (b_{ij}')^2\, v_j',$$

with coefficients to any remaining parents updated analogously. After full conditioning, the posterior mean and covariance $(\hat{x}_{k|k}, P_{k|k})$ are recovered from the updated $(B', V')$. All updates are locally scalar, entirely free of global matrix inversions.
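A two-node scalar instance of the arc reversal (our notation, illustrative values) shows that absorbing an observed measurement reduces to scalar arithmetic and reproduces the textbook Gaussian conditional:

```python
# Two-node Gaussian network x -> z, with z = b*x + noise.
mu_x, v_x = 1.0, 2.0        # prior on x
b, v_z = 0.5, 0.1           # regression coefficient and residual variance

# Reverse the arc x -> z (classical Shachter-style rules):
v_z_new = v_z + b * b * v_x               # marginal variance of z
b_rev = b * v_x / v_z_new                 # new regression: x on z
v_x_new = v_x - b_rev * b_rev * v_z_new   # residual variance of x | z

# Absorb evidence z = z_obs (purely scalar -- no matrix inversion):
z_obs = 1.2
mu_z = b * mu_x
mu_x_post = mu_x + b_rev * (z_obs - mu_z)

# Cross-check against the standard scalar Kalman update:
S = b * b * v_x + v_z                     # scalar innovation variance
K = b * v_x / S                           # scalar Kalman gain
assert abs(mu_x_post - (mu_x + K * (z_obs - mu_z))) < 1e-12
assert abs(v_x_new - (v_x - K * b * v_x)) < 1e-12
```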
4. Algorithmic Implementation
The following steps summarize the recursive application:
```
Input: x̂₀|₀, P₀|₀
for k = 1, ..., K do
  1) Time-update: integrate prediction MDEs → x̂ₖ|ₖ₋₁, Pₖ|ₖ₋₁
  2) Linearize: compute Hₖ, hₖ|ₖ₋₁
  3) Convert Σₖ|ₖ₋₁ to (B, V) via forward recursions
  4) Form augmented (B_aug, V_aug) with Hₖ, Rₖ
  5) For each measurement dimension j:
       a) Reverse arc(s) to condition x on zₖ using the updates above
  6) Recover (x̂ₖ|ₖ, Pₖ|ₖ) from updated (B', V')
end for
```
Each arithmetic operation is limited to scalar addition, multiplication, and division by regression coefficients, ensuring robustness even for highly ill-conditioned measurement matrices.
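An inversion-free sketch in the same spirit as step 5 (not the paper's exact $(B, V)$ bookkeeping): with diagonal $R_k$, each measurement row can be absorbed by purely scalar operations, one division per row, and the result matches the joint update that inverts $S$ explicitly. All matrices below are made up for the demo.

```python
import numpy as np

x = np.array([0.5, -1.0])                  # predicted mean
P = np.array([[2.0, 0.3], [0.3, 1.0]])     # predicted covariance
H = np.array([[1.0, 0.0], [1.0, 0.5]])     # measurement rows
R = np.diag([0.5, 0.2])                    # diagonal measurement noise
z = np.array([0.7, 0.1])

xs, Ps = x.copy(), P.copy()
for j in range(len(z)):                    # one scalar update per row
    h = H[j]
    s = h @ Ps @ h + R[j, j]               # scalar innovation variance
    k = Ps @ h / s                         # gain via a scalar division
    xs = xs + k * (z[j] - h @ xs)
    Ps = Ps - np.outer(k, h @ Ps)

# Reference: joint update with explicit matrix inversion of S
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
assert np.allclose(xs, x + K @ (z - H @ x))
assert np.allclose(Ps, P - K @ H @ P)
```

Sequential scalar processing of independent measurement components is algebraically equivalent to the joint update, which is why the scalar route can sidestep the inversion entirely.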
5. Numerical Stability and Conditioning Advantages
Matrix inversion in the conventional EKF update step amplifies rounding errors when the innovation covariance is nearly singular. GBN-EKF circumvents this issue by decomposing global updates into local conditioning steps:
- Each local operation involves only division by a scalar variance and the addition of variances.
- These distributed updates propagate measurement information incrementally, preserving positive semi-definiteness of the posterior covariance.
- Even when $R_k$ is nearly singular, propagation of conditioning is stabilized by the local, scalar architecture of the GBN.
This mechanism demonstrably avoids catastrophic numerical errors and breakdowns associated with stiff, ill-conditioned measurement updates.
6. Comparative Performance Analysis
6.1 Root Mean Squared Error (RMSE) Metric
Average RMSE (ARMSE) over $M$ Monte Carlo runs and $K$ steps:

$$\mathrm{ARMSE} = \frac{1}{M} \sum_{m=1}^{M} \sqrt{\frac{1}{K} \sum_{k=1}^{K} \left\| x_k^{(m)} - \hat{x}_{k|k}^{(m)} \right\|^2}.$$
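In code, the metric is a per-run RMSE over time averaged across runs; the synthetic arrays below stand in for true and estimated trajectories.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, n = 50, 100, 2                        # runs, steps, state dimension
x_true = rng.normal(size=(M, K, n))
x_est = x_true + 0.1 * rng.normal(size=(M, K, n))  # errors at scale 0.1

# Per-run RMSE over the K steps, then average over the M runs
per_run_rmse = np.sqrt(np.mean(np.sum((x_true - x_est) ** 2, axis=2), axis=1))
armse = per_run_rmse.mean()
assert 0.05 < armse < 0.3                   # consistent with the 0.1 noise scale
```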
6.2 Empirical Evaluation Summary
- Dahlquist-type SDE:
  - Linear regime: all filters tie.
  - Nonlinear regime: EKF/GBN-EKF and UKF/CKF perform comparably at small stiffness; CD-UKF/CKF are slightly better until the stiffness becomes large.
- Van der Pol oscillator, well-conditioned:
  - CD-EKF and CD-GBN-EKF attain comparable ARMSE.
  - UKF/CKF fail at large stiffness.
- Van der Pol oscillator, ill-conditioned:
  - CD-GBN-EKF attains the lowest ARMSE.
  - CD-EKF ARMSE is somewhat higher.
  - Classical EKF ARMSE increases by a large factor, while GBN-EKF remains stable.
6.3 Stability Analysis
- Covariance propagation through the MDE $\dot{P} = FP + PF^\top + GQG^\top$ induces rapid eigenvalue growth for EKF in stiff regimes.
- UKF/CKF incorporate higher-order covariance terms that are stabilizing in mildly nonlinear regimes but exacerbate instability for stiff systems.
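The eigenvalue-growth mechanism can be illustrated on the scalar covariance ODE $\dot{P} = 2\lambda P + q$ (the stiffness value and step sizes below are illustrative): an explicit Euler step is stable only when $|1 + 2\lambda\,\Delta t| < 1$, so a stiff $\lambda$ makes a modest step size diverge.

```python
# Explicit Euler on dP/dt = 2*lam*P + q with a stiff drift.
lam, q, P0 = -500.0, 1.0, 1.0

def euler_run(dt, steps):
    P = P0
    for _ in range(steps):
        P = P + dt * (2.0 * lam * P + q)
    return P

stable = euler_run(1e-4, 1000)    # |1 + 2*lam*dt| = 0.9  -> converges
unstable = euler_run(1e-2, 100)   # |1 + 2*lam*dt| = 9    -> blows up

assert abs(stable - (-q / (2 * lam))) < 1e-2   # near steady state q/(2|lam|)
assert abs(unstable) > 1e10
```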
7. Graphical Model Architecture and Information Flow
State evolution and measurement updates are represented as a directed acyclic graph (DAG):
```
xₖ₋₁ → xₖ → h(xₖ) → zₖ
        ↑            ↑
        wₖ           vₖ
```
- xₖ₋₁ → xₖ: process model (dynamics)
- xₖ → h(xₖ) → zₖ: measurement arc, linearized by Hₖ
- wₖ: process noise injection
- vₖ: measurement noise injection
Arc reversals during evidence absorption convert zₖ to a root node with observed value, distributing the evidence to xₖ and recursively updating the (B, V) parameters. Each measurement dimension undergoes this process independently.
8. Summary and Context
GBN-EKF preserves EKF's first-order accuracy in stiff dynamical regimes while eliminating vulnerable matrix inversion steps during measurement updates. Its core innovation is the use of distributed, scalar local conditioning steps within a Gaussian Bayesian Network architecture, resulting in dramatically improved numerical stability and robust state estimation under ill-conditioned measurement models. Comparative evaluations establish its advantage over classical EKF, UKF, and CKF in scenarios susceptible to numerical breakdown, particularly with near-singular innovation covariances and stiff system dynamics (Behera et al., 4 Nov 2025).