Maximum Correntropy Kalman Filter
- The MCKF replaces the MMSE criterion with the maximum correntropy criterion to robustly handle non-Gaussian noise and impulsive outliers.
- It employs Chandrasekhar-type recursions to achieve computational efficiency while matching the estimation accuracy of standard Riccati-based implementations.
- Adaptive kernel size selection dynamically adjusts the influence of outliers, ensuring stable performance in real-time, heavy-tailed noise environments.
The Maximum Correntropy Kalman Filter (MCKF) constitutes a robust class of state estimators for stochastic dynamic systems subjected to non-Gaussian, impulsive, or heavy-tailed noise environments. By replacing the classical minimum mean-square error (MMSE) objective with the maximum correntropy criterion (MCC), the MCKF improves resilience to outliers through the down-weighting of large deviations by an exponential kernel. The development of fast and numerically stable implementations—most notably the Chandrasekhar-based (factorized) forms—enables high-reliability, computationally efficient estimation in real-time and ill-conditioned scenarios.
1. The Maximum Correntropy Criterion in Kalman Filtering
Correntropy, defined for random variables $X$ and $Y$ as $V(X,Y)=\mathrm{E}[\kappa(X,Y)]$ with a positive-definite kernel $\kappa$ (commonly Gaussian: $\kappa(x,y)=G_\sigma(x-y)=\exp\!\big(-\|x-y\|^2/(2\sigma^2)\big)$), quantifies localized similarity with intrinsic robustness to large errors. The MCC-based Kalman filtering framework replaces the MMSE loss on the prediction and innovation residuals with a combination of two symmetric kernel evaluations. For the classical linear state-space model

$$x_k = F_{k-1}\,x_{k-1} + w_{k-1}, \qquad y_k = H_k\,x_k + v_k$$

with $w_k \sim \mathcal{N}(0, Q_k)$ and $v_k \sim \mathcal{N}(0, R_k)$, the MCC-KF propagates mean and covariance as in the standard filter, then modifies the measurement update with a data-driven weight $\lambda_k$ that down-weights outlier innovations and state-prediction errors. This penalizes outlier impact and improves estimation accuracy under non-Gaussian disturbances (Kulikova, 2023; Chen et al., 2015).
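In code, the weight reduces to a ratio of Gaussian kernel evaluations on whitened residuals. The sketch below illustrates this in numpy; the Mahalanobis-weighted norms follow the convention used in the MCC-KF literature, and the function names are illustrative, not from any particular library.

```python
import numpy as np

def g_sigma(sq_norm, sigma):
    """Gaussian kernel evaluated on a squared residual norm."""
    return np.exp(-sq_norm / (2.0 * sigma**2))

def mcc_weight(y, x_pred, x_prev, F, H, R, P_pred, sigma):
    """Adjusting weight lambda_k: ratio of Gaussian kernel evaluations on the
    whitened innovation and state-prediction residuals."""
    innov = y - H @ x_pred                      # measurement residual
    pred_err = x_pred - F @ x_prev              # state-prediction residual
    num = g_sigma(innov @ np.linalg.solve(R, innov), sigma)
    den = g_sigma(pred_err @ np.linalg.solve(P_pred, pred_err), sigma)
    return num / max(den, 1e-12)                # guard the denominator
```

A large innovation drives the numerator toward zero, so $\lambda_k \to 0$ and the corresponding measurement is effectively discounted; nominal residuals give $\lambda_k \approx 1$ and recover near-classical behavior.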
2. Riccati and Chandrasekhar-Type Recursions in MCKF
In the Riccati-based MCC-KF, the predicted state and covariance are

$$\hat{x}_{k|k-1} = F_{k-1}\,\hat{x}_{k-1|k-1}, \qquad P_{k|k-1} = F_{k-1} P_{k-1|k-1} F_{k-1}^{\top} + Q_{k-1}.$$

In the measurement update, introduce the adjusting weight

$$\lambda_k = \frac{G_\sigma\!\big(\|y_k - H_k \hat{x}_{k|k-1}\|_{R_k^{-1}}\big)}{G_\sigma\!\big(\|\hat{x}_{k|k-1} - F_{k-1}\hat{x}_{k-1|k-1}\|_{P_{k|k-1}^{-1}}\big)}$$

and the weighted gain and state update

$$K_k = \big(P_{k|k-1}^{-1} + \lambda_k H_k^{\top} R_k^{-1} H_k\big)^{-1} \lambda_k H_k^{\top} R_k^{-1}, \qquad \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\,(y_k - H_k \hat{x}_{k|k-1}),$$

where $\|z\|_A^2 = z^{\top} A z$ and the posterior covariance follows the Joseph-form update $P_{k|k} = (I - K_k H_k) P_{k|k-1} (I - K_k H_k)^{\top} + K_k R_k K_k^{\top}$.
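As a concrete illustration, the following minimal sketch implements one predict/update cycle of this Riccati-based form. It is a sketch rather than the paper's exact algorithm: the gain is computed in the algebraically equivalent form $K_k = P_{k|k-1} H_k^{\top}(H_k P_{k|k-1} H_k^{\top} + R_k/\lambda_k)^{-1}$, and with a single fixed-point evaluation the denominator kernel is $G_\sigma(0)=1$, so only the innovation term enters $\lambda_k$.

```python
import numpy as np

def mcc_kf_step(x, P, y, F, H, Q, R, sigma):
    """One predict/update cycle of a Riccati-based MCC-KF (sketch)."""
    # Time update (identical to the classical KF).
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q

    # Adjusting weight lambda_k from the whitened innovation.
    innov = y - H @ x_pred
    lam = np.exp(-(innov @ np.linalg.solve(R, innov)) / (2 * sigma**2))
    lam = max(lam, 1e-8)                       # guard against division by zero

    # Weighted measurement update: gain with effective covariance R / lam.
    S = H @ P_pred @ H.T + R / lam             # modified innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ innov
    # Joseph-form covariance update (valid for any gain K).
    I_KH = np.eye(len(x)) - K @ H
    P_new = I_KH @ P_pred @ I_KH.T + K @ R @ K.T
    return x_new, P_new
```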
Adaptive or constant kernel bandwidth strategies can be employed; in many practical cases, a constant bandwidth $\sigma$ (yielding an effectively constant weight $\lambda$) simplifies the filter and enables optimized recursions (Kulikova, 2023).
The Chandrasekhar-type implementation propagates the low-rank increment $\Delta P_k = P_{k+1|k} - P_{k|k-1}$, factored as $\Delta P_k = L_k M_k L_k^{\top}$ where $L_k \in \mathbb{R}^{n\times\alpha}$, $M_k \in \mathbb{R}^{\alpha\times\alpha}$, and $\alpha = \operatorname{rank}(\Delta P_0) \le n$, with compact factor recursions for $L_k$ and $M_k$, as well as efficient innovation-covariance and gain updates (Kulikova, 2023). These recursions reduce computational cost relative to direct Riccati propagation whenever $\alpha \ll n$.
3. Adaptive Kernel Size Selection and Adjusting Weight
In practice, the kernel bandwidth $\sigma$ significantly influences robustness and convergence. It may be made adaptive, for example by heuristic or fixed-point rules based on innovation magnitudes, chosen so that the corresponding weight $\lambda_k$ is effectively constant over time. When $\lambda_k < 1$, innovation covariances inflate and Riccati updates are damped, which increases the filter's resilience to heavy-tailed noise (Kulikova, 2023). Empirical guidelines recommend tuning $\sigma$ relative to the spread of the inlier residual distribution (Chen et al., 2015).
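The rule below is an illustrative heuristic of this kind, not the specific adaptation scheme of either paper: it tracks a robust (median-based) scale of recent whitened innovation norms and sets $\sigma$ a few scale units wide, so that inlier-sized residuals keep $\lambda_k$ near 1 while impulsive spikes are still sharply down-weighted. All names and parameter values are assumptions.

```python
from collections import deque
import numpy as np

class AdaptiveKernelSize:
    """Heuristic bandwidth selector (illustrative, not the paper's rule):
    sigma tracks a robust scale of recent whitened innovation norms."""

    def __init__(self, window=50, factor=3.0, sigma_min=0.5):
        self.norms = deque(maxlen=window)  # sliding window of recent norms
        self.factor = factor               # sigma = factor * robust scale
        self.sigma_min = sigma_min         # floor to avoid a collapsing kernel

    def update(self, innov, R):
        # Whitened innovation norm ||nu_k||_{R^{-1}}.
        d = float(np.sqrt(innov @ np.linalg.solve(R, innov)))
        self.norms.append(d)
        # The median is insensitive to occasional outlier spikes.
        return max(self.sigma_min, self.factor * float(np.median(self.norms)))
```

With $\sigma$ about three times the typical inlier norm, a nominal residual yields $\lambda_k = \exp(-d^2/2\sigma^2) \approx 0.95$, while an impulse ten times larger is suppressed by orders of magnitude.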
4. Implementation and Variants: Fast and Stable Chandrasekhar MCKF
Algorithmic Steps
Initialization ($k = 0$):
- Set $\hat{x}_{0|-1} = \bar{x}_0$, $P_{0|-1} = \Pi_0$.
- Compute $R_{e,0} = R/\lambda + H P_{0|-1} H^{\top}$, $K_{p,0} = F P_{0|-1} H^{\top} R_{e,0}^{-1}$.
- Form and factor $\Delta P_0 = P_{1|0} - P_{0|-1} = L_0 M_0 L_0^{\top}$, yielding $L_0 \in \mathbb{R}^{n\times\alpha}$ and $M_0 \in \mathbb{R}^{\alpha\times\alpha}$.
For $k = 1, 2, \ldots$:
- Update $R_{e,k}$, $K_{p,k}$, $L_k$, $M_k$ via the factorized Chandrasekhar recursions (see the sketch after this list).
- State prediction: $\hat{x}_{k+1|k} = F\,\hat{x}_{k|k-1} + K_{p,k}\,(y_k - H\,\hat{x}_{k|k-1})$.
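The sketch below shows one way to realize these steps for a time-invariant model. It folds the constant weight $\lambda$ into an inflated effective measurement covariance $R/\lambda$ and then runs classical Chandrasekhar factor recursions; this is a simplified stand-in for the exact MCC-KF factor recursions of Kulikova (2023), which also track the Joseph-form covariance, and the function name and eigenvalue-based factorization are illustrative choices.

```python
import numpy as np

def chandrasekhar_mckf(F, H, Q, R, P0, x0, ys, lam=0.8):
    """Sketch of Chandrasekhar-type recursions for a time-invariant model.

    The constant MCC weight `lam` enters as an inflated effective measurement
    covariance R/lam (a simplification of the exact MCC-KF recursions).
    """
    R_eff = R / lam                              # constant-weight MCC modification
    Re = R_eff + H @ P0 @ H.T                    # innovation covariance R_{e,0}
    Kp = F @ P0 @ H.T @ np.linalg.inv(Re)        # predicted-form gain K_{p,0}
    P1 = F @ P0 @ F.T + Q - Kp @ Re @ Kp.T       # one Riccati step gives dP_0
    dP = 0.5 * (P1 - P0 + (P1 - P0).T)           # symmetrize before factoring
    # Factor dP_0 = L0 M0 L0^T, keeping only numerically nonzero eigenpairs.
    w, V = np.linalg.eigh(dP)
    keep = np.abs(w) > 1e-10 * max(np.abs(w).max(), 1e-300)
    L, M = V[:, keep], np.diag(w[keep])          # L: n x alpha, M: alpha x alpha

    x = np.asarray(x0, dtype=float)              # predicted estimate \hat x_{k|k-1}
    for y in ys:
        x = F @ x + Kp @ (y - H @ x)             # state propagation with K_{p,k}
        # Factor recursions: only m x m and alpha x alpha matrices are formed.
        HL = H @ L
        Re_new = Re + HL @ M @ HL.T
        Kp_new = (Kp @ Re + F @ L @ M @ HL.T) @ np.linalg.inv(Re_new)
        L = (F - Kp @ H) @ L                     # uses the *previous* gain
        M = M - M @ HL.T @ np.linalg.inv(Re_new) @ HL @ M
        Re, Kp = Re_new, Kp_new
    return x
```

The per-step work is dominated by products with the $n \times \alpha$ factor $L_k$, which is the source of the complexity advantage discussed next.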
Complexity: When $\alpha \ll n$ and the measurement dimension $m$ is moderate, the Chandrasekhar MCKF requires only $O(\alpha n^2)$ operations per step, versus $O(n^3)$ for direct Riccati propagation. This yields substantial runtime reductions as $n$ increases, with RMSE identical to the Riccati-based MCKF (Kulikova, 2023).
Algorithmic Variants: Several variants (Algorithms 2–4) further reduce the computational burden, for example by reusing prior-step gain values or by applying the Sherman–Morrison–Woodbury identity so that only small-dimension matrices are inverted.
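The Sherman–Morrison–Woodbury step can be made concrete: the information-form gain from Section 2 can be rewritten so that only an $m \times m$ system is solved, never an $n \times n$ one. A minimal sketch (the function name is hypothetical):

```python
import numpy as np

def mcc_gain_smw(P_pred, H, R, lam):
    """K = (P^{-1} + lam H^T R^{-1} H)^{-1} lam H^T R^{-1}, rewritten via the
    Sherman-Morrison-Woodbury identity so only an m x m matrix is inverted."""
    S = H @ P_pred @ H.T + R / lam            # m x m modified innovation covariance
    return P_pred @ H.T @ np.linalg.inv(S)    # n x m gain, no n x n inversion
```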
5. Numerical Performance, Robustness, and Practical Significance
In satellite in-track motion estimation (the model of Rauch et al., 1965), the Chandrasekhar-type MCC-KF achieves RMSE values identical to those of Riccati-based implementations at reduced CPU cost. For larger-scale systems (large $n$, low increment rank $\alpha$), the efficiency and scalability gains are even more pronounced. The filter's robustness to impulsive outliers surpasses that of the classical KF, and numerical stability is maintained by the square-root, small-matrix propagation structure (Kulikova, 2023).
Empirical summary:
| Method | RMSE | CPU Cost | Robustness to Outliers | Scalability (large $n$) |
|---|---|---|---|---|
| Riccati MCKF | Reference | High | High | Poor |
| Chandrasekhar MCKF | Identical to Riccati | Low | High | Excellent |
| Standard KF | Degraded under outliers | Lower | Poor | Moderate |
Adaptive kernels let the filter respond dynamically to regime changes in the outlier rate and noise structure while maintaining stability and accuracy.
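As a reproducibility-style illustration (not the paper's experiment), the sketch below exercises the `mcc_kf_step` function from Section 2 on a hypothetical two-state kinematic model with Gaussian-mixture (impulsive) measurement noise; all parameters are assumed values chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state kinematic toy model (illustrative values only).
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])

T = 500
x_true = np.zeros((T, 2))
x = np.zeros(2)
for t in range(T):
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)
    x_true[t] = x

# Impulsive noise: nominal N(0, R) contaminated with probability 0.1 by N(0, 100 R).
scale = np.where(rng.random(T) < 0.1, np.sqrt(100 * R[0, 0]), np.sqrt(R[0, 0]))
ys = x_true[:, 0] + scale * rng.standard_normal(T)

def rmse_for(sigma):
    xh, P = np.zeros(2), np.eye(2)
    errs = []
    for t in range(T):
        xh, P = mcc_kf_step(xh, P, np.array([ys[t]]), F, H, Q, R, sigma)
        errs.append(xh - x_true[t])
    return float(np.sqrt(np.mean(np.square(errs))))

print("MCC-KF  RMSE:", rmse_for(sigma=2.0))   # impulses down-weighted
print("KF-like RMSE:", rmse_for(sigma=1e6))   # huge sigma -> lambda ~ 1 -> classical KF
```

Under this contamination model, the bandwidth-limited run should exhibit visibly lower RMSE, mirroring the qualitative comparison in the table above.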
6. Context within Robust and Fast Kalman Filtering
The Chandrasekhar-based MCKF extends both the classical Chandrasekhar covariance-increment methods (the Morf–Sidhu–Kailath algorithms) and improved MCC-based (IMCC-KF) robust Kalman filtering. While typical robust filters address only measurement noise, MCC-based filtering simultaneously down-weights state-prediction and innovation outliers. This unified approach yields computational gains, especially in high-dimensional, low-rank update scenarios, and makes Chandrasekhar-type methods attractive for real-time, embedded, or high-reliability applications (Kulikova, 2023).
7. Limitations and Directions for Future Research
Current Chandrasekhar-type MCKF derivations require the adjusting weight $\lambda$ to be constant, a condition enabled by adaptive kernel size strategies. Open problems include the extension to fully time-varying $\lambda_k$, more sophisticated adaptive kernel selection schemes, and application-specific tuning for high-noise or highly nonlinear environments. Further development of factored forms, including block-sparse and parallelizable variants, is expected to expand the operational range and robustness guarantees.
References:
- Kulikova, M. V. (2023). "Chandrasekhar-Based Maximum Correntropy Kalman Filtering with Adaptive Kernel Size Selection."
- Chen, B. et al. (2015). "Maximum Correntropy Kalman Filter."