Kalman Filtering Algorithm
- Kalman filtering is a recursive estimation algorithm that computes MMSE estimates for linear systems under Gaussian noise through prediction and update steps.
- It unifies Bayesian filtering, prediction-error minimization, and deterministic least-squares methods, providing a robust and adaptable state estimation framework.
- Extensions include adaptive schemes with forgetting factors, robust filtering with NUV priors, distributed architectures, and nonlinear variants using Koopman operators.
A Kalman filter is a recursive algorithm for estimating the hidden state of a linear dynamical system from known control inputs and noisy measurements. Under Gaussian noise assumptions, the Kalman filter computes the minimum mean-square error (MMSE) estimate by sequentially propagating the mean and covariance of the state conditioned on all past measurements. It unifies Bayesian filtering, prediction-error minimization, and deterministic least-squares cost minimization within a single quadratic framework, and admits a wide range of extensions for robustness, adaptivity, and distributed computation.
1. Mathematical Formulation and Fundamental Structure
Consider a discrete-time linear system
$$x_{k+1} = A x_k + B u_k + w_k, \qquad y_k = C x_k + v_k,$$
where $x_k$ is the hidden state, $u_k$ is a known control, $y_k$ is the measurement, $w_k \sim \mathcal{N}(0, Q)$ is process noise, and $v_k \sim \mathcal{N}(0, R)$ is measurement noise.
Let $\hat{x}_{k|k}$ and $P_{k|k}$ denote the filtered mean and covariance after assimilating $y_k$; $\hat{x}_{k|k-1}$ and $P_{k|k-1}$ denote the one-step-ahead prediction. The Kalman recursion is:
$$\hat{x}_{k|k-1} = A\,\hat{x}_{k-1|k-1} + B u_{k-1}, \qquad P_{k|k-1} = A P_{k-1|k-1} A^{\top} + Q,$$
$$K_k = P_{k|k-1} C^{\top}\big(C P_{k|k-1} C^{\top} + R\big)^{-1},$$
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\big(y_k - C\,\hat{x}_{k|k-1}\big), \qquad P_{k|k} = (I - K_k C)\,P_{k|k-1}.$$
This recursion yields the exact MMSE estimate for linear-Gaussian models (Baltieri et al., 2021).
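To make the recursion concrete, the following is a minimal NumPy sketch; the model matrices A, B, C, Q, R and the pair (x, P) are generic inputs assumed for illustration, not tied to any cited reference.

```python
import numpy as np

def kf_step(x, P, u, y, A, B, C, Q, R):
    """One predict/update cycle of the standard Kalman recursion."""
    # Prediction: propagate mean and covariance through the dynamics.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Update: correct the prediction with the innovation y - C x_pred.
    S = C @ P_pred @ C.T + R                     # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(P.shape[0]) - K @ C) @ P_pred
    return x_new, P_new
```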
The filter can also be posed as the minimizer of a least-squares cost over the entire data history, where the recursive update is equivalent to a one-step Newton (Gauss-Newton) descent on this quadratic objective (Lai et al., 16 Apr 2024).
2. Cost Function Characterization and Unified Least-Squares Perspective
A key perspective is the Kalman filter least-squares (KFLS) cost, defined over the full state trajectory as
$$J_k(x_{0:k}) = \|x_0 - \hat{x}_0\|_{P_0^{-1}}^2 + \sum_{i=1}^{k} \Big( \|x_i - A x_{i-1} - B u_{i-1}\|_{Q^{-1}}^2 + \|y_i - C x_i\|_{R^{-1}}^2 \Big),$$
in which the dynamics terms can equivalently be expressed through $\phi_{i \to k}(\cdot)$, the noiseless propagation of the state from time $i$ to time $k$, and a "forgetting" matrix $\Lambda$ reweights past terms to allow robustness to nonclassical disturbances (Lai et al., 16 Apr 2024). Minimizing $J_k$ recursively yields the standard Kalman update, and choices of $\Lambda$ directly specify extensions to adaptive and robust filtering via connections to RLS and its variants.
This cost-based unification allows direct embedding of RLS forgetting schemes (exponential, directional, etc.) into the state estimation framework, bridging deterministic and probabilistic formulations.
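As a numerical illustration of this equivalence (under the classical setting with no forgetting, i.e., $\Lambda = I$), the scalar example below checks that the minimizer of the full-trajectory weighted least-squares cost matches the recursive Kalman estimate; all constants are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(0)
a, c, q, r = 0.9, 1.0, 0.04, 0.25   # dynamics, observation, noise variances
x0_hat, p0, k = 0.0, 1.0, 6         # prior mean/variance and horizon

# Simulate a trajectory and measurements y_1..y_k.
x = x0_hat + np.sqrt(p0) * rng.standard_normal()
ys = []
for _ in range(k):
    x = a * x + np.sqrt(q) * rng.standard_normal()
    ys.append(c * x + np.sqrt(r) * rng.standard_normal())

# Recursive Kalman filter.
m, p = x0_hat, p0
for y in ys:
    m, p = a * m, a * a * p + q                      # predict
    kg = p * c / (c * c * p + r)                     # gain
    m, p = m + kg * (y - c * m), (1.0 - kg * c) * p  # update

# Batch weighted least squares over the whole trajectory x_0..x_k.
rows, rhs = [], []
e0 = np.zeros(k + 1); e0[0] = 1.0
rows.append(e0 / np.sqrt(p0)); rhs.append(x0_hat / np.sqrt(p0))   # prior term
for i in range(1, k + 1):
    d = np.zeros(k + 1); d[i - 1], d[i] = -a, 1.0
    rows.append(d / np.sqrt(q)); rhs.append(0.0)                     # dynamics
    h = np.zeros(k + 1); h[i] = c
    rows.append(h / np.sqrt(r)); rhs.append(ys[i - 1] / np.sqrt(r))  # measurement
x_map = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]

print(m, x_map[-1])  # the two final-state estimates agree to machine precision
```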
3. Adaptive and Robust Extensions
a. Forgetting-Based Adaptive Kalman Filters
By augmenting the prior covariance at each step with an RLS-style forgetting term, the Kalman filter can rapidly increase the Kalman gain and error covariance in response to outliers or abrupt system changes:
$$P_{k|k-1} = \frac{1}{\lambda}\, A P_{k-1|k-1} A^{\top} + Q, \qquad 0 < \lambda \le 1.$$
With $\lambda < 1$, the prediction covariance is inflated, and $\lambda$ can be tuned online via residual statistics (Lai et al., 16 Apr 2024).
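A minimal sketch of this prediction step, assuming a scalar forgetting factor $\lambda \in (0, 1]$ in place of the more general matrix-valued forgetting term:

```python
def predict_with_forgetting(x, P, u, A, B, Q, lam=0.98):
    """Covariance prediction with an RLS-style scalar forgetting factor."""
    x_pred = A @ x + B @ u
    P_pred = (A @ P @ A.T) / lam + Q   # 1/lam > 1 inflates prior uncertainty
    return x_pred, P_pred
```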
b. Outlier-Insensitive Kalman Filtering via NUV Priors
Robust filtering in impulsive-noise environments can also be achieved by augmenting the measurement model with per-component normal-with-unknown-variance (NUV) outlier terms. These variances are estimated online (via EM or alternating maximization) and act as adaptive reweightings in the measurement covariance matrix. The filter reverts to the standard Kalman filter if no outliers are present, while equaling or outperforming prior robust KFs under outlier contamination (Truzman et al., 2022).
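The sketch below conveys the flavor of the approach: each measurement component receives an extra outlier variance estimated from its own residual. The one-shot clipped rule here is a simplified stand-in for the EM/alternating-maximization estimates in Truzman et al. (2022); note that with no outliers it reverts to the standard update, mirroring the property above.

```python
import numpy as np

def robust_update(x_pred, P_pred, y, C, R):
    """Measurement update with per-component outlier variances (NUV-flavored)."""
    e = y - C @ x_pred                        # innovation
    nominal = np.diag(C @ P_pred @ C.T + R)   # nominal innovation variances
    # Simplified stand-in for the EM estimate of the unknown outlier variance:
    # the extra variance needed to explain each residual component, if any.
    s = np.maximum(e**2 - nominal, 0.0)
    S = C @ P_pred @ C.T + R + np.diag(s)     # inflated innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)
    x = x_pred + K @ e
    P = P_pred - K @ C @ P_pred
    return x, P
```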
c. Empirical Noise Models and Non-Gaussian Noise
Kalman-type algorithms incorporating empirically estimated, non-Gaussian measurement noise distributions can be constructed by fitting a monotonic spline to measurement residuals, then embedding this mapping as a non-Gaussian innovation in an augmented state. An iterated posterior linearization filter (IPLF) then approximates the required non-Gaussian update. This approach achieves more accurate state estimation than standard KFs when empirical noise deviates significantly from Gaussianity (Raitoharju et al., 2021).
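A sketch of the residual-modeling step only (the IPLF update itself is more involved), assuming synthetic heavy-tailed residuals and using quantile matching with a shape-preserving monotone spline:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.stats import norm

rng = np.random.default_rng(3)
res = rng.standard_t(3, size=2000)   # synthetic heavy-tailed residuals

# Quantile-match residuals to a standard normal with a monotone spline.
qs = np.linspace(0.01, 0.99, 99)
g = PchipInterpolator(np.quantile(res, qs), norm.ppf(qs))

# g maps a raw residual to an approximately N(0,1) variable; embedding such a
# monotone map as the innovation model is the non-Gaussian update that an
# iterated posterior linearization filter (IPLF) then approximates.
z = g(res)
print(z.mean(), z.std())  # close to 0 and 1
```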
4. Generalizations and Distributed Architectures
a. Time-Varying and State-Dependent Covariances
For systems where the process noise covariance depends on the current state, the prediction step is generalized to
$$P_{k|k-1} = A P_{k-1|k-1} A^{\top} + Q(\hat{x}_{k-1|k-1}),$$
leading to improved performance in inhomogeneous or state-dependent-noise settings. Such filters are MMSE-optimal in the linear case, and minimize a local quadratic cost in the nonlinear case (Gola et al., 29 Jul 2025).
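A minimal sketch, assuming the state-dependent noise model is provided as a callable Q_of_x; the quadratic growth in the example is purely illustrative:

```python
import numpy as np

def predict_state_dependent(x, P, u, A, B, Q_of_x):
    # Evaluate the process-noise covariance at the current filtered estimate.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q_of_x(x)
    return x_pred, P_pred

# Example: noise grows quadratically with the state magnitude (an assumption).
Q_of_x = lambda x: 0.01 * (1.0 + x @ x) * np.eye(len(x))
```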
b. Filters for Cross-Correlated Process/Measurement Noise
When the process and measurement noises are cross-correlated, $\mathbb{E}[w_k v_k^{\top}] = S \neq 0$, the filter can be transformed (by subtracting the feedthrough and matching covariances) into a standard Kalman recursion with adjusted dynamics and noise statistics, removing bias and improving estimation error, especially at high positive correlation (Khalid, 10 Jul 2025).
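A sketch of the classical de-correlation transform on which such constructions rest: absorbing $S R^{-1} y_k$ into the dynamics leaves a process noise that is uncorrelated with the measurement noise, so the standard recursion applies with adjusted matrices.

```python
import numpy as np

def decorrelate(A, C, Q, R, S):
    """Return adjusted (A, Q) and the feedthrough gain J for correlated noise."""
    J = S @ np.linalg.inv(R)
    A_t = A - J @ C          # adjusted dynamics matrix
    Q_t = Q - J @ S.T        # adjusted (reduced) process-noise covariance
    return A_t, Q_t, J       # predict with x_pred = A_t @ x + B @ u + J @ y_k
```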
c. Distributed Kalman Filters
Distributed filtering algorithms—where each agent maintains a local estimate using its own data and neighbors’ information—take several forms:
- Minimum-time consensus DKF: Nodes compute network-wide averages of measurement noise covariances in finite steps, ensuring exact equivalence to the centralized Kalman filter after each consensus round. This achieves centralized-level performance with only local communication and computation (Yuan et al., 2017).
- Randomized gossip-based DKF: Nodes exchange and average local estimates in a randomized pairwise manner, stochastically propagating information throughout the network (see the gossip sketch after this list). The resulting mean-squared error converges to a unique steady state that strictly improves on the noncooperative KF (Qin et al., 2018).
- Diffusion-type DKF with covariance intersection: Adaptation and combination steps allow robust, scalable estimation even under nonstationary and non-independent signal models, provided the network as a whole is sufficiently informative (Xie et al., 2 Nov 2024).
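As a toy illustration of the gossip mechanism in the second variant above, the snippet below averages scalar local estimates over random edges of a ring network; practical gossip-based DKFs also exchange covariance information, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # ring network of four agents
x_local = [0.0, 1.0, 2.0, 3.0]             # each agent's local estimate

for _ in range(200):                        # repeated randomized pairwise rounds
    i, j = edges[rng.integers(len(edges))]
    x_local[i] = x_local[j] = 0.5 * (x_local[i] + x_local[j])

print(x_local)  # every entry approaches the network average 1.5
```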
d. Flexible State Representations and Smoothers
Algorithms such as UltimateKalman employ orthogonal block QR factorizations for general state-space models with time-varying dimension and possibly unknown initial condition, enabling numerically robust streaming inference and efficient (blockwise) backward smoothing (Toledo, 2022).
e. Nonlinear and Manifold-Valued Extensions
For nonlinear models, the Koopman Kalman Filter (KKF) leverages a lift to an empirical finite-dimensional space of observables via EDMD, yielding a Kalman recursion with provable operator approximation error, and maintaining optimality for linear systems (Olguín et al., 6 Nov 2025). On Lie groups (e.g., SO(3)), reversibility and error bounding can be restored via geometric correction to measurement updates (Covanov et al., 22 Sep 2025).
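A minimal EDMD sketch of the lifting step, with an assumed polynomial dictionary and synthetic data from an arbitrary scalar nonlinear map; the fitted matrix then plays the role of the transition matrix in a lifted-space Kalman recursion.

```python
import numpy as np

rng = np.random.default_rng(2)

def psi(x):
    """Dictionary of observables: [x, x^2, 1] (an illustrative choice)."""
    return np.array([x, x**2, 1.0])

# Synthetic data from an arbitrary scalar nonlinear map (an assumption).
xs = [0.5]
for _ in range(200):
    xs.append(0.9 * xs[-1] - 0.2 * xs[-1]**2 + 0.01 * rng.standard_normal())

Psi0 = np.array([psi(x) for x in xs[:-1]])  # lifted states at time k
Psi1 = np.array([psi(x) for x in xs[1:]])   # lifted states at time k+1
K_lift = np.linalg.lstsq(Psi0, Psi1, rcond=None)[0].T  # psi_{k+1} ~= K_lift psi_k

# K_lift now plays the role of A in a lifted-space Kalman recursion, with the
# output matrix C selecting the first observable (the state itself).
```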
5. Applications and Specialized Domains
Kalman filtering underpins a broad array of use cases:
- Signal and image denoising: Application of median-prediction and Laplacian-based AWGN estimation within a Kalman framework yields substantial PSNR improvements in 3D signal (video) denoising, with computational complexity reduction via stationary-gain and elementwise updates (Khmou et al., 2013).
- Speech enhancement: Modulation-domain Kalman filtering for dereverberation and denoising uses nonlinear (log-spectral) signal models and tracks time-varying acoustic parameters (e.g., the reverberation time $T_{60}$ and the direct-to-reverberant ratio, DRR) via adaptive filtering, outperforming baseline speech enhancement algorithms across multiple metrics (Dionelis et al., 2018).
- Joint state and parameter estimation: Integration of recursive generalized extended least squares (GELS) with Kalman updates enables simultaneous state and parameter identification, especially effective with cross-correlated process and measurement noise (Khalid, 10 Jul 2025).
- Sensor bias and registration in multitarget scenarios: Decoupled multitarget filters with bias-fusion and cross-feedback steps have been shown to be exactly equivalent to large augmented state filters but allow much more scalable implementation (Yi et al., 2018).
- Neural and active inference: Steady-state Kalman updates emerge as the fixed points of gradient descent on variational free-energy functionals, providing a normative framework for several neural and cognitive models and suggesting plausible local, predictive-coding-based neural implementations (Baltieri et al., 2021, Millidge et al., 2021).
6. Limitations and Implementation Considerations
While the classical Kalman filter is optimal for linear, purely Gaussian settings, practical deployments often confront model mismatch, nonlinearity, heavy-tailed noise, or structural changes. Adaptive schemes—such as dynamic forgetting, robustified covariances, non-Gaussian updates, or consensus protocols—directly address these limitations but introduce new challenges:
- Need for careful tuning of forgetting factors, online residual estimation, and detection mechanisms (Lai et al., 16 Apr 2024).
- Potential instability or conservatism with state-dependent or time-varying noise statistics; some extensions lack full theoretical guarantees and require case-by-case analysis (Gola et al., 29 Jul 2025).
- Suboptimality of Covariance Intersection and decentralized fusion relative to centralized filtering in some regimes (Xie et al., 2 Nov 2024).
- Increased computational demand for non-Gaussian, high-dimensional, or nonlinear variants, mitigated by algorithmic simplification (UltimateKalman QR streaming, sigma-point methods) or cost-based pruning (elementwise, steady-gain, or approximate consensus).
7. Research Trends and Outlook
Recent developments continue to generalize the Kalman filter into ever-more flexible and robust state estimation tools:
- Unified cost functions that encompass both deterministic and probabilistic estimation (KFLS, RLS, robustified least squares) (Lai et al., 16 Apr 2024).
- Koopman-operator-based lifts for nonlinear filtering with explicit performance guarantees (Olguín et al., 6 Nov 2025).
- Highly parallel and modular distributed architectures for scalability and resilience (Yuan et al., 2017, Qin et al., 2018, Xie et al., 2 Nov 2024).
- Outlier adaptation and empirical noise models for robust inference without sacrificing efficiency or bias (Truzman et al., 2022, Raitoharju et al., 2021).
- Explicit connections to neural coding and active inference for computational neuroscience (Baltieri et al., 2021, Millidge et al., 2021).
Broadly, the Kalman filtering paradigm—classical and contemporary—remains foundational for online estimation and real-time inference in control, signal processing, and computational sciences.