
Confidence-Weighted Extended Kalman Filter

Updated 1 December 2025
  • Confidence-Weighted EKF is a nonlinear estimation technique that integrates adaptive state covariance updates to quantify uncertainty in the presence of process and observation noise.
  • It leverages local linearization via Jacobians to propagate both mean estimates and confidence measures, making it efficient for applications like deep neural networks and sensor fusion.
  • Enhanced through learned calibration maps, the filter corrects overconfident covariance estimates to ensure robust online optimization and accurate uncertainty intervals.

A Confidence-Weighted Extended Kalman Filter (EKF) integrates explicit uncertainty quantification into nonlinear estimation, consistently adjusting confidence estimates throughout inference. In canonical EKF settings—including uncertainty propagation in deep neural networks, online stochastic optimization, and sensor fusion—such filters maintain a state covariance that encodes the algorithm’s local confidence, adapting this quantity through analytic models, data-driven calibration, or a combination of both. Confidence-weighted EKFs thus systematically account for process noise, observation noise, and model misspecification, providing both point estimates and credible covariance intervals at every inference step (Titensky et al., 2018, Tsuei et al., 2021, Vilmarest et al., 2020).

1. Mathematical Foundations of the EKF with Confidence Weighting

The EKF generalizes the linear Kalman filter to nonlinear dynamical systems and observations by locally linearizing the nonlinear mappings at each recursion. The recursion propagates not just the expected state but also its covariance (the "confidence weight"), which encodes uncertainty about the estimate. The canonical discrete-time model comprises:

  • State propagation: $x_k = f(x_{k-1}, u_{k-1}) + \nu_k$, with process noise $\nu_k \sim \mathcal{N}(0, R)$.
  • Measurement model: $y_k = h(x_k) + w_k$, with measurement noise $w_k \sim \mathcal{N}(0, Q)$.

The EKF maintains estimates $\hat x_k$ (mean) and $\hat P_k$ (covariance). The confidence in $\hat x_k$ is reflected in the eigenstructure of $\hat P_k$, which is recursively updated by projecting through the local Jacobians of $f$ and $h$ and by including the noise covariances $R$ and $Q$. The covariance update also acts as a per-dimension adaptive learning rate: low-variance (high-confidence) dimensions admit smaller corrections (Vilmarest et al., 2020).
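
A minimal sketch of one such predict/update cycle in Python/NumPy, using the notation above ($R$ for process noise, $Q$ for measurement noise); the function name, argument order, and the user-supplied Jacobian callables F_jac and H_jac are illustrative assumptions, not a reference implementation:

import numpy as np

def ekf_step(x, P, u, y, f, h, F_jac, H_jac, R, Q):
    """One confidence-weighted EKF recursion: prediction followed by measurement update."""
    # Prediction: propagate mean and confidence through the local linearization of f
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + R              # R: process-noise covariance
    # Update: the Kalman gain weights the innovation by the current confidence
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + Q              # Q: measurement-noise covariance
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new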

2. Confidence-Weighted EKF in Deep Neural Networks

The methodology of (Titensky et al., 2018) recasts a feed-forward deep neural network (DNN) as a discrete-time nonlinear dynamical system, with each layer corresponding to a "time step" and each activation vector $x_\ell$ the "state." Input uncertainty, assumed Gaussian with mean $\mu_0$ and covariance $\Sigma_0$, is propagated through the nonlinear layers via the following confidence-weighted EKF recursion:

  • Initialization: $P_0 = \Sigma_0$ (input uncertainty).
  • Prediction:
    • $x_\ell = f(W_\ell x_{\ell-1} + b_\ell)$, with $f$ the elementwise ReLU.
    • $F_\ell = \partial x_\ell / \partial x_{\ell-1}$, where $F_\ell(i, j) = W_\ell(i, j)$ if $z(i) = (W_\ell x_{\ell-1} + b_\ell)_i > 0$, else $0$.
    • $P_\ell = F_\ell P_{\ell-1} F_\ell^\top + Q_\ell$.
  • Process noise $Q_\ell$: estimated as the empirical sample covariance of held-out layer activations, capturing model error (weight/bias uncertainty); a minimal sketch of this estimate follows the list.
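
A possible realization of this empirical estimate, assuming held-out activations for layer $\ell$ are stacked row-wise in an array acts_l (the function name and data layout are illustrative assumptions):

import numpy as np

def estimate_layer_process_noise(acts_l):
    """Empirical sample covariance of held-out activations at one layer,
    used as the process-noise term Q_l in the prediction recursion."""
    centered = acts_l - acts_l.mean(axis=0, keepdims=True)
    return centered.T @ centered / (acts_l.shape[0] - 1)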

Only the input layer uses a measurement update; at all deeper layers the update step is omitted ($H_\ell = 0$ for $\ell > 0$), reducing the recursion to repeated prediction. The output $(x_L, P_L)$ gives an approximate Gaussian posterior (mean and covariance) over the final DNN outputs (Titensky et al., 2018).

3. Systematic Covariance Calibration and Learned Confidence Weighting

Despite the formal covariance propagation of the EKF, empirical results demonstrate that EKF-predicted uncertainty is systematically miscalibrated—typically over-confident. In visual-inertial localization (Tsuei et al., 2021), miscalibration results from:

  • First-order linearization (neglecting higher-order Jacobian terms).
  • Static noise covariances ($R$, $Q$) that do not adapt to the trajectory or state.
  • Non-Gaussianities in sensor noise and observation functions.

To correct this, (Tsuei et al., 2021) introduces a post-hoc learned calibration map $\phi$ applied to each $\hat P_k$:

  • Simple scaling: $P'_k = s \hat P_k$.
  • Linear transformation: $P'_k = A \hat P_k A^\top$.
  • Neural networks mapping $\hat P_k$ (or $(\hat x_k, \hat P_k)$) to a lower-triangular matrix $Q_k$, then setting $P'_k = Q_k Q_k^\top$.

Calibration targets either Monte Carlo or locally ergodic estimates of the ground-truth covariance, with loss given by squared error over the upper-triangular $(i, j)$ entries, weighted to prioritize diagonals and main blocks. Replacing $\hat P_k \leftarrow P'_k$ in the EKF recursion empirically restores correct $\chi^2$ coverage, with neural-network calibration substantially outperforming scalar or linear transforms (Tsuei et al., 2021).
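
As a rough illustration of the third variant, the sketch below assembles $P'_k = Q_k Q_k^\top$ from a predicted lower-triangular fill and evaluates a diagonally weighted upper-triangular squared error; the helper names and the weight value are assumptions rather than the authors' exact choices:

import numpy as np

def calibrated_covariance(theta, n):
    """Map a learned parameter vector (e.g., a network output) to a valid covariance
    via a lower-triangular factor, guaranteeing positive semidefiniteness."""
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = theta                    # fill the lower triangle with predicted entries
    return L @ L.T                                   # P'_k = Q_k Q_k^T

def calibration_loss(P_cal, P_target, diag_weight=10.0):
    """Squared error over upper-triangular entries, weighted to emphasize the diagonal."""
    iu = np.triu_indices(P_cal.shape[0])
    w = np.where(iu[0] == iu[1], diag_weight, 1.0)   # heavier penalty on variance entries
    return float(np.sum(w * (P_cal[iu] - P_target[iu]) ** 2))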

4. Applications and Algorithmic Workflows

A. DNN Uncertainty Propagation

The confidence-weighted EKF algorithm for DNNs executes as follows:

import numpy as np

def ekf_dnn_propagate(weights, biases, Qs, x, P):
    """Propagate the input mean x and covariance P through a pretrained ReLU network
    {W_l, b_l}, adding the per-layer process noise Q_l at each prediction step."""
    for W, b, Q in zip(weights, biases, Qs):
        z = W @ x + b
        x = np.maximum(z, 0.0)            # ReLU activation
        F = W * (z > 0)[:, None]          # Jacobian: F[i, j] = W[i, j] if z[i] > 0 else 0
        P = F @ P @ F.T + Q               # covariance (confidence) propagation
    return x, P

This yields layerwise mean and covariance, propagating input uncertainty and incorporating layerwise model error.
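
A hypothetical invocation on a two-layer network (all shapes, noise scales, and the seed are illustrative):

import numpy as np

rng = np.random.default_rng(0)
weights = [rng.normal(size=(64, 32)), rng.normal(size=(10, 64))]
biases = [np.zeros(64), np.zeros(10)]
Qs = [0.01 * np.eye(64), 0.01 * np.eye(10)]      # assumed layerwise process noise
x0, P0 = rng.normal(size=32), 0.1 * np.eye(32)   # Gaussian input: mean and covariance
x_out, P_out = ekf_dnn_propagate(weights, biases, Qs, x0, P0)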

B. Online Optimization via EKF Recursion

Confidence-weighted EKF is interpreted as a second-order online optimizer for generalized linear models. At each step:

  • Adapt the learning rate and update direction using the current posterior covariance $P_t$.
  • Update $P_{t+1}$ to reflect reduced uncertainty after observing a new data point.

This mechanism achieves per-coordinate learning rate adaptation and provides rigorous excess risk guarantees (Vilmarest et al., 2020).
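
A minimal illustrative instance for online logistic regression, assuming the Bernoulli variance $\mu(1-\mu)$ stands in for the observation-noise term; this sketches the mechanism rather than the exact algorithm of (Vilmarest et al., 2020):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ekf_logistic_step(theta, P, x_t, y_t):
    """One confidence-weighted EKF step for online logistic regression; the posterior
    covariance P acts as a per-coordinate adaptive learning rate."""
    mu = sigmoid(x_t @ theta)              # predicted probability for the new example
    g = mu * (1.0 - mu)                    # sigmoid derivative at the prediction
    H = g * x_t                            # observation Jacobian, stored as a 1-D vector
    S = H @ P @ H + g                      # innovation variance; g approximates the observation noise
    K = (P @ H) / S                        # Kalman gain: larger steps along low-confidence coordinates
    theta = theta + K * (y_t - mu)         # parameter (state) update
    P = P - np.outer(K, H @ P)             # confidence update: uncertainty shrinks after the observation
    return theta, P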

C. Visual-Inertial Localization

The EKF is enhanced by learning a mapping from internal to calibrated covariance estimates, then using this mapping online in the EKF update, improving statistical calibration as measured by both empirical coverage and the $\mathcal{D}_{L_2}$ divergence from the theoretical $\chi^2$ distribution (Tsuei et al., 2021).
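
A rough sketch of the kind of empirical coverage check referred to here, assuming access to ground-truth state errors; the function name and the confidence level are illustrative:

import numpy as np
from scipy.stats import chi2

def chi2_coverage(errors, covariances, alpha=0.95):
    """Fraction of ground-truth state errors falling inside the alpha-level confidence
    ellipsoid implied by each covariance; a well-calibrated filter should be close to alpha."""
    d = len(errors[0])
    threshold = chi2.ppf(alpha, df=d)
    hits = [e @ np.linalg.solve(P, e) <= threshold
            for e, P in zip(errors, covariances)]
    return float(np.mean(hits))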

5. Computational Tradeoffs and Performance

EKF-based confidence weighting requires one forward pass and one Jacobian–covariance update per step or per DNN layer, scaling as $O(L n^2)$ per input in the DNN context (with $n$ the intermediate layer dimension) (Titensky et al., 2018). Compared to Monte Carlo or unscented transforms (which require $O(M)$ forward passes per input, with $M \gtrsim \operatorname{dim}(x_0)$), the EKF is substantially more efficient. When the process noise $Q_\ell$ is set to $0$, the EKF's standard deviations match those from Monte Carlo almost exactly. Including nonzero $Q_\ell$ leads to larger, more realistic uncertainty intervals, as the filter then accounts for model error. For high-dimensional layers, $P_\ell$ can become dense and expensive, limiting scalability unless the covariance is simplified (e.g., diagonal truncation) (Titensky et al., 2018).
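
For reference, a minimal Monte Carlo baseline for the ReLU-network setting of Section 2, against which the EKF standard deviations can be compared when $Q_\ell = 0$ (sample count and names are illustrative):

import numpy as np

def mc_propagate(weights, biases, mu0, Sigma0, num_samples=1000, rng=None):
    """Monte Carlo baseline: push samples of the Gaussian input through the ReLU
    network and return the empirical output mean and covariance."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.multivariate_normal(mu0, Sigma0, size=num_samples)   # (num_samples, dim_in)
    for W, b in zip(weights, biases):
        X = np.maximum(X @ W.T + b, 0.0)                         # apply each layer to all samples
    return X.mean(axis=0), np.cov(X, rowvar=False)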

In learned calibration scenarios, memoryless neural networks mapping the current $\hat P_k$ recover almost all of the observed covariance miscalibration; incorporating $\hat x_k$ yields only marginal improvement (Tsuei et al., 2021). The computational cost of training such correctors is amortized over their use in online or streaming applications.

6. Assumptions, Limitations, and Theoretical Guarantees

Typical assumptions imposed for EKF-based uncertainty quantification include:

  • Gaussian input and process noise; the output distribution is only approximately Gaussian.
  • Activation functions must be piecewise linear (e.g., ReLU) or differentiable for efficient Jacobian computation.
  • No intermediate observations within non-output layers, so only early initialization conveys external information in DNNs (Titensky et al., 2018).
  • Covariance matrices $P_\ell$ may become impractically large in very high-dimensional state spaces.

Limitations include the inability to track multi-modal distributions (sampling-based methods retain this capability at higher cost), dependence on accurate estimation of $Q_\ell$ (model noise) or calibrated mappings, and the absence of closed-form guarantees that the learned maps $\phi$ do not introduce filter instability if used recursively (Titensky et al., 2018, Tsuei et al., 2021). Theoretical analyses in stochastic optimization demonstrate entry into a local region near the optimum in finite time, followed by logarithmic regret scaling in the local phase under standard regularity conditions (Vilmarest et al., 2020).

7. Directions for Extension and Open Challenges

Empirical evidence suggests that systematic miscalibration of covariance estimates is a universal phenomenon in nonlinear EKF-based filters with fixed $R$, $Q$ and first-order approximations (Tsuei et al., 2021). Learned or data-driven calibration functions are highly effective at restoring statistical coverage, particularly those that operate on the covariance alone. The feasibility of a fully end-to-end "confidence-weighted EKF," where the mapping is integrated into the recursion or predicted by a recurrent neural network, raises open questions about stability and closed-loop consistency. Practical covariance truncation and feature engineering for calibration mappings remain active areas. Generalization of these strategies to other fusion architectures (radar-inertial, GNSS-inertial) is plausible wherever systematic error in posterior uncertainty estimation is observed (Tsuei et al., 2021).
