
Bayesian Multiple-Model Adaptive Estimation (MMAE)

Updated 10 January 2026
  • Bayesian MMAE is a model-based estimation framework that uses a discrete hypothesis set and Bayesian inference to manage uncertainty with parallel estimators.
  • It adaptively updates model probabilities through observation likelihoods, fusing parallel state estimates into a robust overall estimate.
  • Adaptive grid refinement using diversity metrics ensures statistically consistent tracking in applications like attitude calibration and INS/odometer integration.

Bayesian Multiple-Model Adaptive Estimation (MMAE) is a model-based estimation methodology that performs Bayesian inference over a discrete hypothesis set, maintaining a bank of parallel estimators (typically Kalman filter variants) and adaptively updating model probabilities via observation likelihoods. This approach is especially suited for online calibration or fault diagnosis scenarios where the true model or parameter vector is uncertain and must be inferred alongside state estimation. MMAE's discrete hypothesis paradigm, Bayesian update structure, and model-adaptive refinement strategies yield robust, statistically consistent parameter estimates while maintaining tractable computational cost in resource-constrained environments.

1. Problem Formulation and Theoretical Structure

MMAE formulates the unknown parameter (or model index) estimation problem as Bayesian inference across a discrete set of $M$ candidate models. Each model $\mathcal{M}_j$ possesses a fixed parameter vector or hypothesis $\theta_j$ (for example, a sensor misalignment vector or noise covariance) and is assigned an initial prior probability $w_j(0)$. At each epoch, state estimation is performed within the context of each model, followed by a model probability update via the Bayesian likelihood rule using the latest measurement. The core recursions for this Bayesian MMAE scheme are:

  • Parallel state propagation under each $\mathcal{M}_j$: e.g., $\hat x^{(j)}(k)$ via a Kalman filter variant with parameterization $\theta_j$.
  • Posterior model probability update:

$$w_j(k) = \frac{w_j(k-1) \cdot p(y_k \mid \mathcal{M}_j)}{\sum_{h=1}^{M} w_h(k-1) \cdot p(y_k \mid \mathcal{M}_h)}$$

where $p(y_k \mid \mathcal{M}_j)$ is the likelihood under model $j$ and $w_j(k)$ is the normalized model probability at epoch $k$ (Ganganath et al., 26 Jul 2025, Ganganath et al., 3 Jan 2026, Ouyang et al., 2020).
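The recursion above can be sketched as a minimal weight-update step (illustrative code, not from the cited papers; in a full MMAE implementation the likelihood values would come from each filter's innovation):

```python
import numpy as np

def mmae_weight_update(weights, likelihoods):
    """One Bayesian MMAE step: multiply prior weights by per-model
    measurement likelihoods, then renormalize."""
    posterior = weights * likelihoods
    total = posterior.sum()
    if total <= 0:  # guard against numerical underflow of all likelihoods
        return np.full_like(weights, 1.0 / weights.size)
    return posterior / total

# Example: three hypotheses; the second explains the measurement best.
w = np.array([1 / 3, 1 / 3, 1 / 3])
L = np.array([0.1, 0.7, 0.2])
w = mmae_weight_update(w, L)
print(w)  # weight mass concentrates on model 2
```

With a uniform prior the posterior weights are simply the normalized likelihoods, so the mass shifts toward the hypothesis with the smallest residual.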

2. MMAE Hypothesis Grid and Model Bank Construction

The discrete model set—or hypothesis grid—is central. In sensor misalignment estimation, the parameter space is typically discretized as a uniform grid. For a single star-tracker misalignment vector $\boldsymbol\mu_R \in \mathbb{R}^3$, the grid comprises $N_1 \times N_2 \times N_3$ points. Each $\boldsymbol\mu_R^{(j)}$ is mapped to a physical model (e.g., a fixed rotation represented by a small-angle quaternion). In the dual-parameter case (e.g., two star trackers), the grid is a tensor product over both parameters, resulting in $N_1^3 \times N_2^3$ points, forming a 6D hypothesis grid (Ganganath et al., 3 Jan 2026). In application to odometer measurement modeling, the hypothesis grid can index alternative measurement noise covariance values, sensor error models, or other parameterizations (Ouyang et al., 2020).

Each hypothesis is paired with an independent estimator—commonly a Multiplicative Extended Kalman Filter (MEKF) for attitude and bias estimation (Ganganath et al., 26 Jul 2025, Ganganath et al., 3 Jan 2026) or a standard EKF for navigation (Ouyang et al., 2020). All estimators operate in parallel, advancing their state and residuals using the fixed hypothesis.
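As an illustration of the grid construction (a hypothetical helper, not code from the cited works), a single-tracker 3D misalignment grid can be built as a Cartesian product of per-axis samples; a dual-tracker 6D grid would simply be the tensor product of two such grids:

```python
import itertools
import numpy as np

def misalignment_grid(span_rad, n_per_axis):
    """Uniform n x n x n grid of candidate misalignment vectors
    mu in R^3, centered on zero with half-width span_rad per axis."""
    axis = np.linspace(-span_rad, span_rad, n_per_axis)
    return np.array(list(itertools.product(axis, axis, axis)))

grid = misalignment_grid(span_rad=1e-3, n_per_axis=5)  # 5^3 = 125 hypotheses
print(grid.shape)  # (125, 3); each row is one hypothesis mu_R^(j)
```

Each row would then seed one filter in the bank; refinement (Section 5) re-calls this constructor with a smaller `span_rad` around the current estimate.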

3. Bayesian Update and Likelihood Computation

At each step, model-conditioned measurement predictions $\hat y^{(j)}$ are computed and residuals formed:

$$r_j = y_k - \hat y^{(j)}$$

Assuming additive Gaussian noise with covariance $R$, the likelihood under model $\mathcal{M}_j$ is

$$p(y_k \mid \mathcal{M}_j) = (2\pi)^{-m/2}\, |R|^{-1/2} \exp\left( -\frac{1}{2} r_j^T R^{-1} r_j \right)$$

where $m$ is the measurement vector dimension. The Bayesian update (sometimes presented using unnormalized weights $\tilde{w}_j$) yields normalized model probabilities $w_j(k)$. In practice, the Gaussian normalization constant $(2\pi)^{-m/2}|R|^{-1/2}$ divides out of the update whenever $R$ is common to all models, so only the exponential terms matter. This likelihood structure applies to both simple scalar measurements and complex stacked residuals embedding attitude, rate, and star-tracker line-of-sight errors (Ganganath et al., 26 Jul 2025, Ganganath et al., 3 Jan 2026, Ouyang et al., 2020).
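A direct transcription of the Gaussian likelihood (an illustrative sketch; a production implementation would typically work in log-likelihoods and use a Cholesky factorization of $R$ for numerical stability):

```python
import numpy as np

def gaussian_likelihood(residual, R):
    """p(y_k | M_j) for residual r_j = y_k - y_hat^(j) under N(0, R)."""
    m = residual.size
    maha = float(residual @ np.linalg.solve(R, residual))  # r^T R^-1 r
    norm = (2 * np.pi) ** (-m / 2) / np.sqrt(np.linalg.det(R))
    return norm * np.exp(-0.5 * maha)

# Zero residual with identity covariance in 2D: peak density 1/(2*pi).
print(gaussian_likelihood(np.zeros(2), np.eye(2)))
```

The raw density can underflow for large residuals, which is why the weight update is often implemented with unnormalized or log-domain weights.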

4. State and Parameter Estimation Fusion

After updating model probabilities, MMAE fuses the outputs of the model-conditioned filters into a single state and parameter estimate. State fusion is typically done as a weighted average:

$$\hat x(k) = \sum_j w_j(k)\, \hat x^{(j)}(k)$$

$$\hat P(k) = \sum_j w_j(k) \left[ P^{(j)}(k) + \left(\hat x^{(j)}(k) - \hat x(k)\right) \left(\hat x^{(j)}(k) - \hat x(k)\right)^T \right]$$

Quaternion fusion for attitude estimates is performed using Markley’s method for weighted quaternion averaging: forming $M = \sum_j w_j\, \hat q^{(j)} [\hat q^{(j)}]^T$ and selecting the eigenvector of $M$ with maximum eigenvalue (Ganganath et al., 26 Jul 2025, Ganganath et al., 3 Jan 2026). The final fused parameter estimate is $\hat \theta = \sum_j w_j \theta_j$.
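The fusion equations and Markley’s quaternion average can be sketched as follows (illustrative NumPy code; the function names are my own):

```python
import numpy as np

def fuse_states(weights, states, covs):
    """Probability-weighted mean and covariance, including the
    spread-of-the-means term from the fusion equation."""
    xbar = np.einsum('j,jn->n', weights, states)
    Pbar = np.zeros_like(covs[0])
    for w, x, P in zip(weights, states, covs):
        d = (x - xbar)[:, None]
        Pbar += w * (P + d @ d.T)
    return xbar, Pbar

def markley_average(weights, quats):
    """Weighted quaternion average: principal eigenvector of
    M = sum_j w_j q_j q_j^T (quaternions as unit 4-vectors)."""
    M = np.einsum('j,jk,jl->kl', weights, quats, quats)
    _, vecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    return vecs[:, -1]            # eigenvector of the largest eigenvalue

w = np.array([0.5, 0.5])
xs = np.array([[1.0, 0.0], [3.0, 0.0]])
Ps = np.stack([np.eye(2), np.eye(2)])
xbar, Pbar = fuse_states(w, xs, Ps)
print(xbar)  # midpoint of the two model estimates
```

Note that the eigenvector is defined only up to sign, which is harmless since $q$ and $-q$ represent the same rotation.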

5. Hypothesis Diversity Metric and Adaptive Grid Refinement

A key innovation in recent MMAE research is grid refinement based on hypothesis diversity. Rather than relying on a single “winning” model (MAP), adaptive refinement is triggered using a diversity metric $\Psi$, defined as

$$\Psi = 100 \times \frac{1}{N} \left( \sum_{j=1}^N w_j^2 \right)^{-1}$$

so that $\Psi = 100\%$ for uniform weights and $\Psi \to 0\%$ as weights concentrate. When $\Psi$ falls below a threshold (e.g., $10\%$) and refinement levels remain, the hypothesis grid is re-centered at either the MAP or, preferably, the weight-mean estimate, and a finer-resolution grid is instantiated (Ganganath et al., 26 Jul 2025, Ganganath et al., 3 Jan 2026). All estimator states are re-initialized (to the MAP or fused state), and weights are reset uniformly. This dual-trigger, weighted-mean centered refinement strategy prevents premature grid collapse, ensures coverage of the parameter posterior, and robustly tracks misalignments at arcsecond-level accuracy.
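The diversity metric and the dual refinement trigger can be expressed compactly (an illustrative sketch; the threshold value and level bookkeeping follow the description above):

```python
import numpy as np

def diversity_percent(weights):
    """Psi = 100 * (1/N) / sum_j w_j^2, in percent.
    100% for uniform weights; 100/N % when one weight holds all mass."""
    N = weights.size
    return 100.0 / (N * np.sum(weights ** 2))

def needs_refinement(weights, threshold=10.0, levels_left=1):
    """Dual trigger: refine only when diversity drops below the
    threshold AND refinement levels remain."""
    return levels_left > 0 and diversity_percent(weights) < threshold

uniform = np.full(125, 1 / 125)
print(diversity_percent(uniform))  # 100.0
peaked = np.zeros(125); peaked[0] = 1.0
print(diversity_percent(peaked))   # 0.8 (= 100/125)
```

When the trigger fires, the grid constructor would be re-invoked around the weighted-mean parameter estimate with a reduced span, and the weights reset to uniform.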

The following table summarizes grid refinement triggers and their impact (Ganganath et al., 26 Jul 2025):

| Refinement Trigger | Avg. Refinements | Final Misalignment RMSE (rad) |
|---|---|---|
| Classical (MAP center) | 3.69 | $1.999\times10^{-4}$ |
| $\Psi$-trigger (MAP center) | 6.00 | $1.511\times10^{-4}$ |
| $\Psi$-trigger (weighted-mean) | 6.00 | $1.168\times10^{-4}$ |

Weighted-mean refinement yields a $41.6\%$ reduction in RMSE compared to the classical trigger, and achieves arcsecond-level calibration (Ganganath et al., 26 Jul 2025).

6. MMAE in Attitude, Sensor Calibration, and Navigation Applications

MMAE has been successfully deployed for online calibration of star-tracker misalignment in deep space navigation (Ganganath et al., 26 Jul 2025, Ganganath et al., 3 Jan 2026). In such formulations, MMAE achieves simultaneous attitude, angular velocity, and gyro bias estimation, adapting to fixed sensor misalignments by maintaining a 3D (single tracker) or 6D (dual tracker) grid of misalignment hypotheses. Parallel 9-state MEKFs act as estimator cores, fed with TRIAD-based vector measurements and gyroscope data. Adaptive grid refinement is essential to constrain computational burden and target posterior concentration in the most likely parameter region.

In terrestrial navigation, MMAE is employed for robust measurement model selection and parameter estimation in INS/odometer integration. Multiple parallel EKFs model pulse-accumulation, pulse-increment, and pulse-derived velocity measurements, each possibly using a distinct measurement noise level or model error parameter. MMAE fusion substantially reduces mean position error and long-term drift rates versus a single-model EKF, with up to $45\%$ mean error reduction and a $4$–$7\times$ reduction in drift slope (Ouyang et al., 2020).

7. Robustness, Consistency, and Computational Tractability

Monte Carlo simulation demonstrates that MMAE-based estimation architectures achieve statistically robust and consistent tracking of both states and discrete parameters. For spacecraft applications, misalignment RMSE below $1.2 \times 10^{-4}$ rad ($\approx 24$ arcsec) and attitude error below $2\times10^{-3}$ rad are typical, with all errors remaining within $3\sigma$ bounds and estimation consistency verified empirically (Ganganath et al., 26 Jul 2025, Ganganath et al., 3 Jan 2026). The MMAE approach maintains computational tractability by adaptively focusing grid resolution where posteriors concentrate, enabling on-board, real-time implementation on resource-constrained platforms such as CubeSats.

This suggests that MMAE architectures, when equipped with appropriate diversity-driven adaptive refinement, can serve as practical and statistically robust solutions for autonomous in-flight calibration, complex model selection, and adaptive navigation under parameter uncertainty.
