
Adaptive State Estimation

Updated 29 December 2025
  • Adaptive state estimation is a set of methodologies that adjust estimator parameters dynamically based on incoming data and system changes.
  • It enables real-time tuning of noise covariances, quantizer thresholds, and sensor actions to enhance estimation accuracy.
  • These techniques integrate filter fusion, machine learning, and active sensing to achieve scalable, robust state tracking in diverse applications.

Adaptive state estimation refers to a broad class of methodologies in which the structure, parameters, or measurement actions of a state estimator are adaptively modified in response to data, changing system properties, or operational requirements. These approaches span classical control, signal processing, power systems, quantum measurement, structural monitoring, and machine learning. Central themes include online learning of dynamic and measurement noise covariances, adaptive adjustment of quantizer or sampling parameters, active and sequential sensing, adaptive reduction of large-dimensional systems, and the principled fusion of heterogeneous or asynchronous data. Adaptive state estimation strategies are typically motivated by time-varying environments, non-Gaussian or unknown noise, limited communication, or the need for computational tractability at scale.

1. Mathematical Frameworks and Foundational Concepts

The mathematical foundation of adaptive state estimation is most commonly established within the finite- or infinite-dimensional state-space formalism

$$x_{k+1} = f(x_k, u_k, w_k), \qquad y_k = h(x_k, v_k)$$

where $x_k$ is the (possibly high-dimensional) state, $u_k$ is the control or action, $y_k$ is the observation, and $w_k, v_k$ are process and observation noise. Adaptation may occur in multiple aspects: the statistics of $w_k$ and $v_k$, the measurement map $h$ (e.g., quantizer thresholds or sampling instants), the choice of sensing actions, or the dimension and structure of the model itself, as elaborated in the sections below.
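
As a concrete illustration, the discrete-time model above can be simulated directly. The dynamics $f$, observation map $h$, and noise scales below are illustrative assumptions, not taken from any cited work; the point is only to show where an adaptive estimator's hooks (noise statistics, measurement map, actions) sit in the loop.

```python
import numpy as np

# Minimal simulation of x_{k+1} = f(x_k, u_k, w_k), y_k = h(x_k, v_k).
# The concrete f, h, and noise scales are made-up examples.

def f(x, u, w):
    # Example dynamics: a slowly rotating, lightly damped 2-D state.
    A = np.array([[0.99, 0.10], [-0.10, 0.99]])
    return A @ x + u + w

def h(x, v):
    # Example observation: noisy measurement of the first state component.
    return np.array([x[0]]) + v

rng = np.random.default_rng(0)
x = np.zeros(2)
trajectory, observations = [], []
for k in range(50):
    w = rng.normal(0.0, 0.05, size=2)   # process noise w_k
    v = rng.normal(0.0, 0.10, size=1)   # observation noise v_k
    x = f(x, np.zeros(2), w)            # u_k = 0 here
    y = h(x, v)
    trajectory.append(x.copy())
    observations.append(y.copy())

print(len(trajectory), len(observations))  # prints: 50 50
```

An adaptive estimator would consume `observations` while re-tuning, for instance, its assumed noise covariances or its next sensing action online.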

2. Adaptive State Estimation under Weak Adaptive Submodularity

Weak adaptive submodularity generalizes adaptive submodular function maximization for sequential decision problems, providing approximation guarantees for greedy adaptive policies in active state estimation (Yong et al., 2017). Let the system state $x$ take values in a finite set $\mathcal{X}$ with stochastic observations, and let the agent sequentially select sensing actions $v \in \mathcal{V}$ to maximize the expected reduction in state uncertainty (quantified by a nonnegative, monotone reward function $f$). If for any partial realization $\psi_t$ of actions and observations, the expected marginal benefit $\Delta(v \mid \psi_t)$ satisfies

$$\Delta(v \mid \psi_{t'}) \leq \zeta\, \Delta(v \mid \psi_t) \quad \forall\, \psi_t \subseteq \psi_{t'},\ v \notin v_{1:t'}$$

for some $\zeta \geq 1$ (the adaptive submodularity factor), then $f$ is $\zeta$-weakly adaptive submodular. For group-based active diagnosis with persistent faults, this property is established, leading to the following guarantee for greedy action selection:

$$f_{\mathrm{avg}}(\pi^{\mathrm{greedy}}_k) > \left(1 - e^{-1/\zeta}\right) f_{\mathrm{avg}}(\pi^*_k)$$

where $\pi^*_k$ is the optimal policy under a budget of $k$ actions. Empirically, in aircraft electrical system state estimation with sensor faults, adaptive greedy policies achieved performance indistinguishable from exhaustive policies at polynomial computational cost (Yong et al., 2017).
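
The greedy policy analyzed above can be sketched for a toy persistent-fault diagnosis problem. The binary outcome table, the state and action counts, and the uniform prior are all illustrative assumptions; persistent faults make test outcomes deterministic given the hypothesis, so the reward is the number of hypotheses eliminated.

```python
import numpy as np

# Greedy adaptive action selection for active diagnosis (toy example).
# outcome[v, x] is the deterministic bit returned by test v when the true
# fault hypothesis is x; the table itself is randomly generated here.

rng = np.random.default_rng(1)
n_states, n_actions = 16, 8
outcome = rng.integers(0, 2, size=(n_actions, n_states))

def expected_gain(v, candidates):
    # Expected number of hypotheses eliminated by action v under a uniform
    # prior over the currently consistent candidate set.
    n = len(candidates)
    ones = int(outcome[v, candidates].sum())
    zeros = n - ones
    # P(observe 1) * eliminated-if-1 + P(observe 0) * eliminated-if-0
    return (ones / n) * zeros + (zeros / n) * ones

true_state = 5
candidates = np.arange(n_states)
taken = []
while len(candidates) > 1:
    remaining = [v for v in range(n_actions) if v not in taken]
    if not remaining:
        break
    v = max(remaining, key=lambda a: expected_gain(a, candidates))
    taken.append(v)
    obs = outcome[v, true_state]                      # persistent fault: noiseless
    candidates = candidates[outcome[v, candidates] == obs]

print(taken, candidates)
```

The weak adaptive submodularity guarantee says this myopic loop stays within a $(1 - e^{-1/\zeta})$ factor of the optimal budgeted policy.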

3. Adaptive Estimation for Quantized and Resource-Constrained Measurement

State estimation with quantized or rate-constrained measurements motivates adaptive tuning of quantizer thresholds or sampling schedules. In the set-membership context, optimal adaptive quantization reduces estimation uncertainty (measured by the radius of information) by adjusting thresholds at each time step based on the current feasible state set (Casini et al., 2023):

  • Quantizer adaptation: For scalar or vector quantizers, the central threshold $\tau_c(k)$ and step size $\Delta(k)$ are dynamically chosen to minimize the largest possible post-update set radius, guaranteeing that the uncertainty remains bounded under mild observability and quantizer-resolution conditions.
  • Recursive geometric set updates: Outer-approximations (parallelotopes, zonotopes, constrained zonotopes) are propagated and intersected for feasible set update, with a computational-accuracy tradeoff.
  • Numerical performance: With adaptive thresholds, set diameters may be reduced by factors of 2–3 compared to fixed quantizers, at moderate computational cost that depends on the chosen set representation (Casini et al., 2023).
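
A one-dimensional sketch of threshold adaptation in the set-membership setting, with assumed scalar dynamics and noise bound (a much simplified stand-in for the vector algorithms above): re-centering the binary threshold on the current feasible interval halves the worst-case post-measurement radius at every step, a bisection effect.

```python
# Set-membership estimation with an adaptive binary quantizer (scalar sketch).
# Assumed model: x_{k+1} = a x_k + w, |w| <= w_bar; measurement is one bit,
# y_k = 1{x_k >= tau_c(k)}. All constants are illustrative.
a, w_bar = 0.9, 0.02
lo, hi = -1.0, 1.0        # initial feasible interval for the state
x = 0.3                   # true state (unknown to the estimator)

for k in range(20):
    tau_c = 0.5 * (lo + hi)          # adaptive central threshold: bisect the set
    bit = 1 if x >= tau_c else 0     # quantized measurement
    # Measurement update: intersect the feasible interval with the half-line.
    if bit:
        lo = max(lo, tau_c)
    else:
        hi = min(hi, tau_c)
    # Time update: propagate the interval through the dynamics (a > 0).
    lo, hi = a * lo - w_bar, a * hi + w_bar
    x = a * x  # noise-free truth, within the assumed bound |w| <= w_bar

print(hi - lo)  # interval diameter shrinks until it saturates near the noise level
```

The diameter recursion here is $d_{k+1} = a\,d_k/2 + 2\bar{w}$, so uncertainty contracts geometrically to a floor set by the disturbance bound, mirroring the bounded-uncertainty guarantee stated above.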

Separately, in adaptive sampling, the optimal rate allocation (given a finite sample budget) is cast as a sequence of optimal stopping times triggered by the estimation error exiting a dynamically shrinking envelope. For Gaussian diffusions, closed-form envelope solutions yield distortion improvements up to 70% over uniform or event-triggered (Delta) sampling (0904.4358).

4. Algorithmic Adaptivity: Covariance Learning and Filter Fusion

Adaptive state estimators frequently include online identification of process/measurement noise covariances, model uncertainty measures, or weighting factors:

  • Adaptive Ensemble Filtering: Nonparametric filters using jackknife-generated ensemble priors, updating $Q, R$ via bias-corrected hold-out residuals and matching posterior/ensemble variances, enhance robustness to unknown noise and misspecifications (Busch et al., 2014).
  • Adaptive Fading and Robust Nonlinear Filtering: Adaptive fading CKF and robust AUKF variants utilize double transitive scaling of covariance and/or measurement noise matrices, driving them using innovation and residual statistics (possibly under entropy-based or mixture-correntropy objectives). These methods yield improved accuracy and stability under unknown, non-Gaussian, and nonstationary noise (Narasimhappa, 2021, Nguyen et al., 10 Apr 2025).
  • Filter Fusion and Performance-Based Switching: In nonlinear/non-Gaussian settings, the estimator can switch between multiple filters (EKF, UKF, PF) according to posterior Cramér–Rao lower bound (PCRLB) tightness. A particle filter provides an online approximation of the PCRLB, and switching selects the filter whose empirical MSE approaches the (approximated) PCRLB, outperforming any single filter in RMSE for financial volatility estimation (Yashaswi, 2021).
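The covariance-matching idea behind several of these schemes can be sketched for a scalar Kalman filter: the measurement-noise variance $R$ is re-estimated from a sliding window of innovations using $\mathbb{E}[\nu^2] = HPH^\top + R$. The model constants, window length, and floor value below are illustrative assumptions.

```python
import numpy as np

# Innovation-based covariance matching for a scalar Kalman filter.
# R starts deliberately mis-specified and is adapted online.

rng = np.random.default_rng(2)
a, q_true, r_true = 0.95, 0.01, 0.25   # assumed true system
x, xhat, P = 0.0, 0.0, 1.0
Q, R = 0.01, 1.0                       # filter's Q correct, R wrong on purpose
innovations, window = [], 30

for k in range(500):
    x = a * x + rng.normal(0, np.sqrt(q_true))
    y = x + rng.normal(0, np.sqrt(r_true))
    # Predict
    xhat, P = a * xhat, a * a * P + Q
    # Innovation and adaptive R: match E[nu^2] = P + R (H = 1 here)
    nu = y - xhat
    innovations.append(nu)
    if len(innovations) >= window:
        c = np.mean(np.square(innovations[-window:]))
        R = max(c - P, 1e-4)           # floor keeps R positive
    # Update
    K = P / (P + R)
    xhat, P = xhat + K * nu, (1 - K) * P

print(R)  # should drift toward the true value 0.25
```

The same matching principle, applied to residuals and with scaling factors rather than direct substitution, underlies the fading and robust variants cited above.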

5. Active and Machine Learning-Driven Adaptive Estimation

Adaptive state estimation increasingly integrates sequential experiment design and machine learning:

  • Active Sensing via Adaptive Submodularity: The generalization to weak adaptive submodularity permits near-optimal adaptive greedy action selection for noisy, correlated, or group-based diagnosis problems (Yong et al., 2017).
  • DRL-Enabled State Estimation: In the context of distribution system state estimation with multi-rate, multi-source data, deep Q-network (DQN) agents adaptively tune short-term prediction/exponential smoothing parameters to minimize forecast error, fusing synchronized and forecasted measurements in a Kalman or WLS update, leading to considerably improved accuracy over baselines (Zhang et al., 2023).
  • Hybrid Integration in Machine Learning Pipelines: For RL and neuroevolutionary algorithms (Q-learning, NEAT), accurate adaptive state estimation using particle filtering significantly stabilizes training, accelerates convergence, and mitigates the adverse effects of sensor noise (Song et al., 10 Apr 2025). The filtered state provides a bias-corrected, low-variance input to both value-function updates and evolutionary fitness evaluation, outperforming baseline learning strategies that operate directly on noisy data.
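
A minimal bootstrap particle filter of the kind used to pre-filter noisy observations before learning updates can be sketched as follows; the linear-Gaussian toy model and all constants are assumptions for illustration only.

```python
import numpy as np

# Bootstrap particle filter producing a low-variance state estimate from
# noisy sensor readings; the filtered mean is what a learner would consume.

rng = np.random.default_rng(3)
n_particles = 500
particles = rng.normal(0.0, 1.0, n_particles)
x = 0.0

estimates = []
for k in range(100):
    x = 0.9 * x + rng.normal(0.0, 0.1)            # true (hidden) state
    y = x + rng.normal(0.0, 0.5)                  # noisy sensor reading
    # Propagate particles through the assumed dynamics.
    particles = 0.9 * particles + rng.normal(0.0, 0.1, n_particles)
    # Weight by the Gaussian measurement likelihood, then resample.
    w = np.exp(-0.5 * ((y - particles) / 0.5) ** 2)
    w /= w.sum()
    particles = rng.choice(particles, size=n_particles, p=w)
    estimates.append(particles.mean())            # filtered state fed to the learner

print(abs(estimates[-1] - x))  # filtered error, well below the raw sensor noise
```

Feeding `estimates[k]` rather than the raw `y` into value-function updates or fitness evaluations is the bias-corrected, low-variance input referred to above.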

6. Large-Scale and Application-Specific Adaptive State Estimation

Adaptive strategies enable tractable state estimation in high-dimensional and physically structured systems:

  • Model Reduction via Clustering: For large-scale PDE-constrained systems (e.g., agro-hydrological fields), adaptive clustering of state trajectories and Petrov–Galerkin projection yield reduced models whose dimension adapts to local dynamics. An adaptive moving-horizon estimator then operates on a dynamically reduced state, ensuring feasibility and robustness as dynamics change (Sahoo et al., 2021).
  • Sensor Fusion and Self-Tuning: In smart structures, adaptive fusion of strain-gauge and camera data utilizes out-of-sequence measurement (OOSM) updating, self-tuning of model and observer parameters via minimization of camera-based position discrepancies, and greedy sensor-selection to maximize system observability with minimal instrument count (Warsewa et al., 2020).
  • Spectral Estimation: Adaptive multitaper state-space spectral estimation adapts process-noise intensities window- and frequency-wise, driven by exponential smoothing of the local nonstationarity metric, improving denoising and spectral tracking for nonstationary biomedical signals (Song et al., 2021).
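
The greedy sensor-selection step mentioned above can be sketched as log-det maximization over a finite-horizon observability Gramian: each candidate sensor contributes a row to $C$, and sensors are added one at a time to maximize the resulting Gramian's log-determinant. The system matrices, horizon, budget, and candidate pool below are made-up assumptions.

```python
import numpy as np

# Greedy sensor selection for observability: pick `budget` sensor rows that
# maximize log det of the finite-horizon observability Gramian
# W = sum_{k<horizon} (A^k)' C' C A^k.

rng = np.random.default_rng(4)
n, horizon, budget = 6, 10, 3
A = 0.95 * np.linalg.qr(rng.normal(size=(n, n)))[0]       # stable dynamics
candidates = [rng.normal(size=(1, n)) for _ in range(8)]  # candidate sensor rows

def gramian_logdet(rows):
    C = np.vstack(rows)
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(horizon):
        W += Ak.T @ C.T @ C @ Ak
        Ak = A @ Ak
    # Small regularizer keeps log det finite before full observability.
    return np.linalg.slogdet(W + 1e-9 * np.eye(n))[1]

chosen = []
for _ in range(budget):
    remaining = [i for i in range(len(candidates)) if i not in chosen]
    best = max(remaining,
               key=lambda i: gramian_logdet([candidates[j] for j in chosen + [i]]))
    chosen.append(best)

print(chosen)
```

Log-det of the Gramian is a monotone submodular set function, which is what makes this greedy loop a principled surrogate for the combinatorial placement problem.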

7. Quantum and Statistical Physics: Adaptive Estimation Schemes

In quantum systems, adaptive protocols are crucial for overcoming informational or fundamental measurement constraints:

  • Adaptive Quantum State Estimation (AQSE): Recursively updates the optimal measurement basis based on previous measurement outcomes, achieving strong consistency and asymptotic efficiency (attaining the quantum Cramér–Rao bound) in parameter estimation for quantum states. AQSE generalizes to multi-parameter estimation and can be used in practical quantum information and metrology (Okamoto et al., 2012, Kimizu et al., 2023, Vargas et al., 2024).
  • Adaptive Fidelity and Information Bounds: In scenarios where only certain (diagonal or partial) statistics can be obtained efficiently, adaptive measurement design and mixing of local-POVM statistics enable estimation of tight lower and upper bounds on quantum state fidelity, surpassing static mutually unbiased basis schemes with fewer settings in many cases (Wu, 2020).
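
The recursive idea behind AQSE can be illustrated with a toy single-qubit phase estimation in which the measurement setting is re-chosen from the Bayesian posterior after each shot. The likelihood model $P(0 \mid \theta, \phi) = \cos^2((\theta - \phi)/2)$ and the heuristic setting rule are a simplified assumption, not the cited protocols.

```python
import numpy as np

# Adaptive Bayesian phase estimation (toy AQSE-style sketch): after each
# outcome, the controllable phase phi is re-set relative to the current
# posterior peak, where the next outcome is maximally informative.

rng = np.random.default_rng(5)
theta_true = 1.1
grid = np.linspace(-np.pi, np.pi, 2001)        # discretized phase hypotheses
posterior = np.ones_like(grid) / len(grid)

for shot in range(300):
    # Adapt: offset the measurement basis by pi/2 from the current estimate.
    phi = grid[np.argmax(posterior)] + np.pi / 2
    p0 = np.cos(0.5 * (theta_true - phi)) ** 2  # P(outcome = 0)
    outcome = 0 if rng.random() < p0 else 1
    like = np.cos(0.5 * (grid - phi)) ** 2
    posterior *= like if outcome == 0 else (1.0 - like)
    posterior /= posterior.sum()                # renormalize each shot

estimate = grid[np.argmax(posterior)]
print(estimate)  # close to theta_true = 1.1
```

Re-choosing the setting keeps the per-shot Fisher information away from its degenerate points, which is the mechanism by which adaptive schemes approach the quantum Cramér–Rao bound asymptotically.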

Adaptive state estimation constitutes a diverse set of rigorously characterized, high-performance strategies appropriate for systems with time-varying, uncertain, or adversarial attributes. The field leverages advances in stochastic control, optimal experimental design, information theory, and machine learning to realize robust, efficient, and scalable estimation across physical, engineered, and quantum domains. As system dimensionality, heterogeneity, and operating uncertainty all increase, adaptive methodologies are essential for practical real-world deployment and large-scale autonomy.
