
Maximum Correntropy Kalman Filter

Updated 30 November 2025
  • The MCKF replaces the MMSE criterion with the maximum correntropy criterion to robustly handle non-Gaussian noise and impulsive outliers.
  • It employs Chandrasekhar-type recursions to achieve computational efficiency while matching the estimation accuracy of standard Riccati-based implementations.
  • Adaptive kernel size selection dynamically adjusts the influence of outliers, ensuring stable performance in real-time, heavy-tailed noise environments.

The Maximum Correntropy Kalman Filter (MCKF) is a robust class of state estimators for stochastic dynamic systems operating in non-Gaussian, impulsive, or heavy-tailed noise environments. By replacing the classical minimum mean-square error (MMSE) objective with the maximum correntropy criterion (MCC), the MCKF gains resilience to outliers: an exponential kernel down-weights large deviations. The development of fast and numerically stable implementations, most notably the Chandrasekhar-based (factorized) forms, enables reliable, computationally efficient estimation in real-time and ill-conditioned scenarios.

1. The Maximum Correntropy Criterion in Kalman Filtering

Correntropy, defined for random variables $X$ and $Y$ as $V(X, Y) = \mathbb{E}[k_\sigma(X - Y)]$ with a positive-definite kernel (commonly Gaussian: $k_\sigma(e) = \exp(-e^2/(2\sigma^2))$), quantifies localized similarity with intrinsic robustness to large errors. The MCC-based Kalman filtering framework replaces the MMSE loss on prediction and innovation with a combination of two symmetric kernel evaluations:

$$\widehat{x}_{k|k} = \arg\max_x \left[ k_\sigma\big(\| x - F\,\widehat{x}_{k-1|k-1} \|\big) + k_\sigma\big(\| y_k - H x \|\big) \right]$$

For the classical linear state-space model

$$x_{k+1} = F x_k + G w_k, \qquad y_k = H x_k + v_k$$

with $w_k \sim \mathcal{N}(0, Q)$ and $v_k \sim \mathcal{N}(0, R)$, the MCC-KF propagates the mean and covariance as in the standard filter, then modifies the measurement update with a data-driven weight $\lambda_k < 1$ that down-weights outlier innovations and state-prediction errors. This limits outlier impact and improves estimation accuracy under non-Gaussian disturbances (Kulikova, 2023; Chen et al., 2015).
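As a minimal sketch of the definition above (function names are illustrative), the Gaussian kernel and a sample correntropy estimate are one-liners in NumPy:

```python
import numpy as np

def gaussian_kernel(e, sigma):
    """Gaussian correntropy kernel k_sigma(e) = exp(-e^2 / (2 sigma^2))."""
    return np.exp(-np.asarray(e, dtype=float) ** 2 / (2.0 * sigma ** 2))

def correntropy(x, y, sigma):
    """Sample estimate of V(X, Y) = E[k_sigma(X - Y)]."""
    return float(np.mean(gaussian_kernel(np.asarray(x) - np.asarray(y), sigma)))

# Large deviations contribute almost nothing to the criterion:
# gaussian_kernel(10.0, 1.0) is about 2e-22, so outliers are down-weighted.
```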

2. Riccati and Chandrasekhar-Type Recursions in MCKF

In the Riccati-based MCC-KF, define the predicted state and covariance as

$$\widehat{x}_{k|k-1} = F \widehat{x}_{k-1|k-1}, \qquad P_{k|k-1} = F P_{k-1|k-1} F^\top + G Q G^\top$$

In the measurement update, introduce

$$R_{e,k}^\lambda = R + \lambda_k H P_{k|k-1} H^\top$$

$$K_k^\lambda = \lambda_k P_{k|k-1} H^\top (R_{e,k}^\lambda)^{-1}$$

$$\widehat{x}_{k|k} = \widehat{x}_{k|k-1} + K_k^\lambda (y_k - H \widehat{x}_{k|k-1})$$

$$P_{k|k} = (I - K_k^\lambda H) P_{k|k-1}$$

where

$$\lambda_k = \frac{\exp\big(-\| y_k - H \widehat{x}_{k|k-1} \|^2/(2\sigma_k^2)\big)}{\exp\big(-\| \widehat{x}_{k|k-1} - F \widehat{x}_{k-1|k-1} \|^2/(2\sigma_k^2)\big)}$$

Adaptive or constant kernel bandwidth strategies can be employed; in many practical cases, a constant $\lambda$ simplifies the filter and enables optimized recursions (Kulikova, 2023).
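A minimal single-step sketch of the Riccati-based recursions above (names and shapes are illustrative): note that the denominator kernel of $\lambda_k$ evaluates to one here, since the prediction is exactly $F\widehat{x}_{k-1|k-1}$ and the state-prediction-error norm vanishes.

```python
import numpy as np

def mcc_kf_step(x_prev, P_prev, y, F, G, H, Q, R, sigma):
    """One Riccati-based MCC-KF step; x_prev: (n,), y: (m,)."""
    # Time update (identical to the standard Kalman filter).
    x_pred = F @ x_prev
    P_pred = F @ P_prev @ F.T + G @ Q @ G.T

    # Adjusting weight lambda_k: Gaussian kernel of the innovation norm.
    # The denominator kernel is 1 because the prediction error
    # x_pred - F @ x_prev is identically zero in this linear setup.
    innov = y - H @ x_pred
    lam = np.exp(-np.linalg.norm(innov) ** 2 / (2.0 * sigma ** 2))

    # lambda-weighted measurement update.
    R_e = R + lam * H @ P_pred @ H.T
    K = lam * P_pred @ H.T @ np.linalg.inv(R_e)
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new, lam
```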

The Chandrasekhar-type implementation propagates the low-rank difference $\Delta_{k+1} = P_{k+1|k} - P_{k|k-1}$, factored as $\Delta_k = L_k M_k L_k^\top$ with $L_k \in \mathbb{R}^{n \times \alpha}$ and rank $\alpha \ll n$:

$$\Delta_{k+1} = (F - \lambda K_{p,k} H)\left(\Delta_k + \lambda P_{k|k-1} H^\top (R_{e,k-1}^\lambda)^{-1} H P_{k|k-1}\right)(F - \lambda K_{p,k} H)^\top$$

with compact factor recursions for $L_k$ and $M_k$, as well as efficient innovation and gain updates (Kulikova, 2023). These recursions reduce computational cost relative to direct Riccati propagation.
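To make the cost structure concrete, the following sketch performs one factored update of $(R_e^\lambda, K_p, L, M)$ with a constant weight $\lambda$, never forming an $n \times n$ matrix. It assumes the predictor-gain convention $K_{p,k} = F P_{k|k-1} H^\top (R_{e,k}^\lambda)^{-1}$ and uses one classical variant of the Chandrasekhar identities; the exact recursions of (Kulikova, 2023) may differ in detail.

```python
import numpy as np

def factored_update(F, H, R_e, K_p, L, M, lam):
    """One Chandrasekhar-style update from the factored increment
    Delta_k = L @ M @ L.T (L: n x alpha). Constant weight lam."""
    HL, FL = H @ L, F @ L                      # only n*alpha-sized products
    # R_{e,k+1} = R_{e,k} + lam * H Delta_k H^T
    R_e_next = R_e + lam * HL @ M @ HL.T
    # From K_{p,k+1} R_{e,k+1} = K_{p,k} R_{e,k} + F Delta_k H^T
    K_p_next = (K_p @ R_e + FL @ M @ HL.T) @ np.linalg.inv(R_e_next)
    # Factor update: L_{k+1} = (F - lam K_{p,k} H) L_k, while M absorbs
    # the inner correction -lam * M (H L)^T inv(R_{e,k+1}) (H L) M.
    L_next = FL - lam * K_p @ HL
    M_next = M - lam * M @ HL.T @ np.linalg.inv(R_e_next) @ HL @ M
    return R_e_next, K_p_next, L_next, M_next
```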

3. Adaptive Kernel Size Selection and Adjusting Weight

In practice, the kernel bandwidth $\sigma_k$ strongly influences robustness and convergence. It may be made adaptive, for example by heuristic or fixed-point rules based on innovation magnitudes, so that the corresponding $\lambda_k$ is effectively constant over time. When $\lambda < 1$, innovation covariances inflate and Riccati updates are damped, which increases the filter's resilience to heavy-tailed noise (Kulikova, 2023). Empirical guidelines recommend tuning $\sigma_k$ relative to the spread of the inlier residual distribution (Chen et al., 2015).
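Purely as an illustration (the cited papers use their own heuristic and fixed-point rules), a bandwidth rule might scale a robust spread estimate of recent innovation norms:

```python
import numpy as np

def adaptive_sigma(innovations, scale=3.0, floor=1e-6):
    """Heuristic kernel bandwidth from recent innovations (rows).
    Uses the median absolute deviation (MAD) of innovation norms;
    scale and floor are illustrative tuning choices."""
    norms = np.linalg.norm(np.atleast_2d(innovations), axis=1)
    mad = np.median(np.abs(norms - np.median(norms)))
    return max(scale * 1.4826 * mad, floor)   # 1.4826: MAD-to-sigma factor
```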

4. Implementation and Variants: Fast and Stable Chandrasekhar MCKF

Algorithmic Steps

Initialization ($k=0$):

  • Set $\widehat{x}_{0|-1}$ and $P_{0|-1} = \Pi_0$.
  • Compute $R_{e,0}^\lambda$ and $K_{p,0}$.
  • Form and factor $\Delta_1 = F \Pi_0 F^\top + G Q G^\top - \lambda K_{p,0} R_{e,0}^\lambda K_{p,0}^\top - \Pi_0$, yielding $L_0, M_0$.

For $k=0,1,\ldots$:

  • Update $R_{e,k+1}^\lambda$, $K_{p,k+1}$, $L_{k+1}$, $M_{k+1}$ via the factorized Chandrasekhar recursions.
  • State prediction: $\widehat{x}_{k+1|k} = F \widehat{x}_{k|k-1} + \lambda K_{p,k} (y_k - H \widehat{x}_{k|k-1})$ (see the sketch after this list).
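The sketch below wires these steps together, reusing `factored_update` from the sketch in Section 2; the eigendecomposition-based initial factorization of $\Delta_1$ and the rank tolerance are illustrative choices rather than the paper's prescription.

```python
import numpy as np

def chandrasekhar_mckf(ys, x0, Pi0, F, G, H, Q, R, lam, rank_tol=1e-10):
    """Chandrasekhar-type MCC-KF in prediction form, constant weight lam.
    Propagates the factored covariance increment instead of P itself."""
    # Initialization (k = 0).
    R_e = R + lam * H @ Pi0 @ H.T
    K_p = F @ Pi0 @ H.T @ np.linalg.inv(R_e)
    Delta1 = F @ Pi0 @ F.T + G @ Q @ G.T - lam * K_p @ R_e @ K_p.T - Pi0
    # Factor Delta_1 = L0 M0 L0^T, keeping numerically nonzero eigenpairs.
    w, V = np.linalg.eigh(0.5 * (Delta1 + Delta1.T))
    keep = np.abs(w) > rank_tol
    L, M = V[:, keep], np.diag(w[keep])        # L: n x alpha, alpha << n

    x, preds = x0.copy(), []
    for y in ys:
        x = F @ x + lam * K_p @ (y - H @ x)    # state prediction step
        R_e, K_p, L, M = factored_update(F, H, R_e, K_p, L, M, lam)
        preds.append(x.copy())
    return np.array(preds)
```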

Complexity: when $\alpha \ll n$ and $m$ is moderate, the Chandrasekhar MCKF requires $O(n^2\alpha) + O(m^3)$ operations per step. This yields substantial runtime reductions as $n$ increases, with RMSE identical to the Riccati-based MCKF (Kulikova, 2023).

Algorithmic Variants: Several variants (Algorithms 2–4) are available to further minimize computational burden, such as using prior-step gain values or applying the Sherman–Morrison–Woodbury identity to maintain only small-dimension matrix inversions.

5. Numerical Performance, Robustness, and Practical Significance

In satellite in-track motion estimation (the model of Rauch et al., 1965, with $n=4$), the Chandrasekhar-type MCC-KF achieves RMSE values identical to those of Riccati-based implementations at reduced CPU cost. For larger-scale systems (large $n$, low-rank covariance increment $\alpha$), the efficiency and scalability gains are even more pronounced. The filter is more robust to impulsive outliers than the classical KF, and it maintains numerical stability thanks to its square-root, small-matrix propagation structure (Kulikova, 2023).

Empirical summary:

| Method | RMSE | CPU cost | Robustness to outliers | Scalability ($n \uparrow$) |
| --- | --- | --- | --- | --- |
| Riccati MCKF | baseline | High | High | Poor |
| Chandrasekhar MCKF | identical to Riccati | Low | High | Excellent |
| Standard KF | Worse | Lower | Poor | Moderate |

The use of adaptive kernels ensures that the filter dynamically responds to regime changes in the outlier rate and noise structure, maintaining filter stability and accuracy.
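The qualitative pattern in the table can be checked on a toy problem. The snippet below (a hypothetical 2-state constant-velocity model with Gaussian-mixture impulsive measurement noise, not the satellite model used in the paper's experiments) drives the `mcc_kf_step` sketch from Section 2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: 90% nominal measurement noise, 10% impulsive outliers.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
G = np.eye(2)
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])

x_true, xs, ys = np.zeros(2), [], []
for _ in range(500):
    x_true = F @ x_true + G @ rng.multivariate_normal(np.zeros(2), Q)
    r = 100.0 * R if rng.random() < 0.1 else R   # impulsive outlier branch
    ys.append(H @ x_true + rng.multivariate_normal(np.zeros(1), r))
    xs.append(x_true.copy())

# Run the MCC-KF sketch and report RMSE over the trajectory.
est, P, err = np.zeros(2), np.eye(2), []
for x_k, y_k in zip(xs, ys):
    est, P, _ = mcc_kf_step(est, P, y_k, F, G, H, Q, R, sigma=2.0)
    err.append(est - x_k)
print("MCC-KF RMSE:", np.sqrt(np.mean(np.square(err))))
```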

6. Context within Robust and Fast Kalman Filtering

The Chandrasekhar-based MCKF extends both classical Chandrasekhar covariance-difference methods (the Morf-Sidhu-Kailath-Sayed algorithms) and the improved MCC-based robust Kalman filter (IMCC-KF). While typical robust filters address only measurement noise, MCC-based filtering simultaneously down-weights state-prediction and innovation outliers. This unified approach yields computational gains, especially for high-dimensional, low-rank update scenarios, and makes Chandrasekhar-type methods attractive for real-time, embedded, or high-reliability applications (Kulikova, 2023).

7. Limitations and Directions for Future Research

Current Chandrasekhar-type MCKF derivations require the adjusting weight $\lambda$ to be constant, enabled by adaptive kernel size strategies. Open problems include the extension to fully time-varying $\lambda_k$, more sophisticated adaptive kernel selection schemes, and application-specific tuning for high-noise or highly nonlinear environments. Further development of factored forms, including block-sparse and parallelizable variants, is expected to expand the operational range and robustness guarantees.


References:

  • Kulikova, "Chandrasekhar-based Maximum Correntropy Kalman Filtering with Adaptive Kernel Size Selection", 2023.
  • Chen et al., "Maximum Correntropy Kalman Filter", 2015.