Augmented Kalman Filter (AKF)
- Augmented Kalman Filter (AKF) is an advanced state estimation method that jointly identifies model parameters and induces sparsity for adaptive, online system identification.
- It utilizes an augmented state-space model with UKF-based prediction-update steps and incorporates Bayesian ARD for dynamic, interpretable model structure selection.
- AKF demonstrates practical gains in control engineering and computational physics, offering improved accuracy and robustness over standard filtering approaches.
The Augmented Kalman Filter (AKF) is an extension of classic Kalman filtering frameworks that incorporates model-parameter identification and sparse model structure selection jointly with state estimation. This paradigm is foundational for online dynamic system identification under noise, partial observability, and model uncertainty. AKF serves as the backbone for a variety of Sparse Kalman Identification (SKI) algorithms, all of which leverage recursive Bayesian filtering and sparsification strategies for interpretable, adaptive, and accurate physical modeling in fields ranging from control engineering to computational physics (Mei et al., 22 Nov 2025).
1. Mathematical Formulation and State–Parameter Augmentation
AKF operates on a discrete-time nonlinear state-space system

$$x_{k+1} = f(x_k, u_k) + w_k, \qquad y_k = h(x_k) + v_k,$$

where $x_k$ is the unmeasured state, $u_k$ the known input, $f$ the unknown nonlinear term (parameterized by a dictionary expansion), and $w_k$, $v_k$ are process and measurement noise, respectively (Mei et al., 22 Nov 2025). The unknown dynamic function is written as a linear combination of basis functions:

$$f(x, u) = \Phi(x, u)\,\theta = \sum_{j=1}^{m} \theta_j \phi_j(x, u).$$

Here, $\Phi = [\phi_1, \ldots, \phi_m]$ is an overcomplete library of nonlinear basis functions, and $\theta \in \mathbb{R}^m$ are the unknown weights.
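For concreteness, a small dictionary mixing constant, linear, and trigonometric terms could be evaluated as in the sketch below; the specific basis functions are illustrative assumptions, not the paper's library, and the actual choice is problem-specific.

```python
import numpy as np

def dictionary(x, u):
    """Evaluate a toy overcomplete basis library Phi(x, u).

    The basis functions here (constant, linear, signed-quadratic,
    sinusoid) are illustrative assumptions, not the paper's library.
    """
    x, u = np.atleast_1d(x), np.atleast_1d(u)
    feats = [np.ones(1),      # constant term
             x,               # linear state terms
             u,               # linear input terms
             x * np.abs(x),   # signed-quadratic nonlinearity
             np.sin(x)]       # trigonometric terms
    return np.concatenate(feats)

def f_hat(x, u, theta):
    """Dictionary expansion f(x, u) = Phi(x, u) @ theta."""
    return dictionary(x, u) @ theta
```

With the weights set to zero, `f_hat` returns zero; identification then amounts to estimating which entries of `theta` are nonzero.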
AKF introduces an augmented state

$$z_k = \begin{bmatrix} x_k \\ \theta_k \end{bmatrix},$$

and a transition function ensuring time-propagation of both state and model parameters:

$$z_{k+1} = \begin{bmatrix} \Phi(x_k, u_k)\,\theta_k \\ \theta_k \end{bmatrix} + \begin{bmatrix} w_k \\ 0 \end{bmatrix},$$

with observations $y_k = h(x_k) + v_k$ (Mei et al., 22 Nov 2025).
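A minimal sketch of the augmented transition for a scalar state, assuming a four-term toy dictionary (the library and dimensions are illustrative only):

```python
import numpy as np

def dictionary(x, u):
    """Toy basis library Phi(x, u); the real library is problem-specific."""
    return np.array([1.0, x, u, np.sin(x)])

def F_aug(z, u):
    """Augmented transition: propagate the state, hold the weights fixed.

    z = [x, theta_1, ..., theta_m] stacks a scalar state with the
    dictionary weights; the weights follow an identity (random-walk
    mean) dynamic, so they pass through unchanged.
    """
    x, theta = z[0], z[1:]
    x_next = dictionary(x, u) @ theta   # f(x, u) = Phi(x, u) @ theta
    return np.concatenate([[x_next], theta])
```

Because the parameter block is carried through the transition unchanged, the same UKF machinery estimates states and weights jointly.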
2. Recursive Filtering: UKF Integration and Posterior Updates
State–parameter estimation proceeds via recursive Kalman filtering in the augmented space. Prediction and update steps employ the Unscented Kalman Filter (UKF) for improved numerical stability, leveraging a joint mean–covariance propagation of both system state and model coefficients.
- Prediction: Propagate sigma points through the augmented transition to compute the predicted mean $\hat{z}_{k|k-1}$ and covariance $P_{k|k-1}$.
- Update: On measurement $y_k$, apply the standard Kalman update with gain
  $$K_k = P_{k|k-1} H_k^\top \big( H_k P_{k|k-1} H_k^\top + R \big)^{-1},$$
  where $H_k$ denotes the linearized measurement mapping; the posterior mean and covariance are
  $$\hat{z}_{k|k} = \hat{z}_{k|k-1} + K_k \big( y_k - h(\hat{z}_{k|k-1}) \big), \qquad P_{k|k} = (I - K_k H_k)\, P_{k|k-1}.$$
The process is repeated over all time steps (Mei et al., 22 Nov 2025).
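The update step can be sketched directly; this is the generic textbook Kalman update applied in the augmented space, not code from the paper:

```python
import numpy as np

def kalman_update(z_pred, P_pred, y, h, H, R):
    """Standard Kalman measurement update in the augmented space.

    z_pred, P_pred : predicted mean and covariance of the augmented state
    h, H           : measurement function and its Jacobian at z_pred
    R              : measurement noise covariance
    """
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain K_k
    z_post = z_pred + K @ (y - h(z_pred))  # posterior mean
    P_post = (np.eye(len(z_pred)) - K @ H) @ P_pred  # posterior covariance
    return z_post, P_post
```

With a near-noiseless measurement of the full augmented state, the posterior mean collapses onto the observation, as expected from the gain formula.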
3. Sparse Structure Induction via Bayesian ARD
Sparsification of the model is achieved by embedding Automatic Relevance Determination (ARD) priors on the parameter block $\theta$:

$$p(\theta \mid \alpha) = \prod_{j=1}^{m} \mathcal{N}(\theta_j;\, 0,\, \alpha_j).$$

Online ARD maximizes the marginal likelihood sequentially over the hyperparameters $\alpha = (\alpha_1, \ldots, \alpha_m)$ using gradient descent. The AKF posterior is updated to reflect the new prior variances via a pseudo-measurement correction, where the correction employs standard Gaussian conditioning formulas (Mei et al., 22 Nov 2025).
This facilitates adaptive model structure selection: as ARD variances diminish, their associated basis functions are pruned, yielding an evolving, parsimonious model.
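A pruning pass might look like the following sketch, where the threshold `tol` is an assumed tuning constant rather than a value from the paper:

```python
import numpy as np

def prune_by_ard(alpha, theta_mean, tol=1e-4):
    """Prune basis functions whose ARD prior variance has collapsed.

    alpha      : current ARD prior variances, one per basis function
    theta_mean : posterior mean of the weights (zeroed where pruned)
    tol        : pruning threshold on the prior variance (assumed value)
    """
    keep = alpha > tol
    theta_sparse = np.where(keep, theta_mean, 0.0)
    return np.flatnonzero(keep), theta_sparse
```

Basis functions whose variance falls below `tol` contribute (numerically) nothing to the posterior, so dropping them leaves the model's predictions essentially unchanged while shrinking the dictionary.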
4. Algorithmic Workflow and Computational Aspects
A prototypical AKF/SKI algorithm follows these sequential steps:
- Measure $y_k$.
- AKF (UKF) predict–update to obtain the current posterior $\hat{z}_{k|k}$, $P_{k|k}$.
- For a fixed number of inner iterations, update the prior variances $\alpha$ via ARD gradient descent.
- Apply pseudo-measurement correction to posterior for new prior.
- Output updated mean/covariance (Mei et al., 22 Nov 2025).
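The steps above can be sketched as one loop iteration; `predict_update`, `ard_grad`, and `correct` are user-supplied callables standing in for the UKF predict–update, the ARD marginal-likelihood gradient, and the pseudo-measurement correction:

```python
import numpy as np

def ski_step(z, P, y, u, alpha, predict_update, ard_grad, correct,
             n_ard_iters=5, lr=1e-2):
    """One time step of a prototypical AKF/SKI loop (schematic sketch).

    The three callables are hypothetical stand-ins for the components
    described in the text; their signatures here are assumptions.
    """
    # 1) UKF predict-update on the augmented state given measurement y
    z, P = predict_update(z, P, y, u)
    # 2) inner ARD iterations: gradient ascent on the marginal likelihood
    for _ in range(n_ard_iters):
        alpha = alpha + lr * ard_grad(alpha, z, P)
    # 3) fold the updated prior variances back into the posterior
    z, P = correct(z, P, alpha)
    return z, P, alpha
```

Keeping the three components behind function arguments mirrors the modular structure of the workflow: the filter, the sparsifier, and the correction can each be swapped independently.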
Each prediction and update step incurs cost cubic in the augmented state dimension due to Cholesky factorization; the ARD steps are cubic in the model dimension but remain tractable for moderate-sized dictionaries.
5. Comparative Performance and Real-World Applications
Empirical studies demonstrate that AKF-based SKI methods achieve significant improvements in model selection accuracy and robustness. For instance, in WingRock benchmark experiments, the mean error for SKI (AKF + ARD) reached 0.15 versus 0.95 for the baseline UKF and 11.45 for basic SINDy, an 84.21% improvement over the UKF (Mei et al., 22 Nov 2025). AKF also enables time-delay feature selection in dynamic models: SKI rapidly zeroes out all delay terms except the one at the optimal lag, as indicated by the collapsing ARD variances.
In quadrotor UAV system identification (simulated and real data), AKF with ARD consistently selected physically meaningful terms—constant, linear-PWM, and linear-drag—while standard UKF retained dense, non-interpretable coefficient blocks.
6. Limitations, Practical Guidelines, and Extensions
AKF-based SKI scales best with moderate dictionary dimensionality ($m$ in the tens); for larger bases, preliminary feature selection or clustering is recommended. Strong model non-Gaussianity may necessitate replacing the UKF with a particle filter. Guidelines for tuning hyperparameters (ARD initialization, process and measurement noise covariances, gradient step size) are provided for practical deployments.
Limitations include sensitivity to excitation richness and slow re-identification rates for highly dynamic systems. Extensions include integration of time-varying process/measurement covariances, error modeling augmentation, and use of alternative sparsification schemes.
7. Significance and Context in System Identification
AKF unifies state-observation filtering, online system identification, and interpretable Bayesian sparsification, enabling real-time, adaptive model construction under sequential data and partial measurement scenarios. The methodology obviates batch-learning requirements and “full history” data, supporting efficient state-tracking, robust parameter inference, and explicit model structure selection. Applications encompass adaptive monitoring, fault detection, and control synthesis in high-dimensional, noise-perturbed environments, with quantitative gains over baseline Kalman filtering and sparse regression (Mei et al., 22 Nov 2025).