
Misspecified Bayesian Cramér–Rao Bound (MBCRB)

Updated 30 December 2025
  • MBCRB is a generalized Bayesian bound that quantifies estimator performance in the presence of model mismatch using pseudotrue parameter mapping.
  • It leverages Bayesian Fisher information and covariance–score identities to derive rigorous lower bounds on estimator covariance, with closed-form expressions for linear–Gaussian models.
  • MBCRB provides practical insights for model validation, selection, and estimator design by directly incorporating prior information and bias effects due to misspecification.

The Misspecified Bayesian Cramér–Rao Bound (MBCRB) generalizes the classical Bayesian Cramér–Rao bound (BCRB) to settings where the working statistical model used by an estimator does not coincide with the true data-generating mechanism. The MBCRB quantifies achievable estimation performance under parametric model mismatch by introducing a pseudotrue parameter mapping and modified information measures, yielding rigorous lower bounds on estimator covariance that account for both the assumed model and the prior. The formalism applies in broad settings and admits closed-form expressions for linear–Gaussian problems, supporting both model validation and model selection (Tang et al., 2023).

1. True Versus Assumed Models and Model Mismatch

Fundamental to the construction of the MBCRB is the distinction between two statistical models: the "true" data-generating model and the "assumed" estimator model. The true model is specified by the joint pdf $p_*(x, \psi) = p_*(x|\psi)\, p(\psi)$, with latent parameter $\psi \in \Psi \subset \mathbb{R}^{n_\psi}$ and data $x \in \mathbb{R}^{n_x}$. In contrast, the assumed model adopts a potentially misspecified joint pdf $f(x, \theta) = f(x|\theta)\,\pi(\theta)$ with parameter $\theta \in \Theta \subset \mathbb{R}^{n_\theta}$, where $\Psi$ and $\Theta$ need not coincide, nor must $p_*$ agree with $f$. Model mismatch arises whenever $f(x,\theta) \neq p_*(x,\psi)$, a practically ubiquitous scenario in statistical inference where simplified or incorrect parametric choices are made.

Key to the MBCRB is the "pseudotrue parameter" mapping $\theta_0(\psi)$, defined for each $\psi$ as the minimizer in Kullback–Leibler divergence of the assumed model $f(x, \theta)$ from the true conditional likelihood $p_*(x|\psi)$:

$$\theta_0(\psi) = \arg\min_{\theta \in \Theta} \mathcal{D}\big[p_*(x|\psi) \,\Vert\, f(x,\theta)\big] = \arg\min_{\theta \in \Theta} \, -\mathbb{E}_{x|\psi}\big[\ln f(x,\theta)\big].$$

This mapping identifies, for each true value $\psi$, the $\theta$ that best approximates $p_*(x|\psi)$ in the KL sense.
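For intuition, the mapping can be evaluated numerically. The sketch below (Python/NumPy; all matrices are made-up illustrative values, and the prior is omitted for simplicity) computes the closed-form KL minimizer for a pair of Gaussian likelihoods and checks it against random perturbations of the expected negative log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# True conditional model: x | psi ~ N(H_star psi, Sigma_star)
# Assumed model:          x | theta ~ N(H theta, Sigma)
# (all matrices below are illustrative, made-up values)
H_star = np.array([[1.0, 0.2], [0.0, 1.0], [0.5, 0.3]])
H      = np.array([[1.0, 0.0], [0.0, 1.0], [0.4, 0.4]])  # misspecified H
Sigma_star = 0.5 * np.eye(3)
Sigma      = 0.8 * np.eye(3)                             # misspecified noise
psi = np.array([1.0, -2.0])

Si = np.linalg.inv(Sigma)

def expected_nll(theta):
    """E_{x|psi}[-ln f(x|theta)] up to an additive constant in theta."""
    m = H_star @ psi - H @ theta
    return 0.5 * (m @ Si @ m + np.trace(Si @ Sigma_star))

# Closed-form KL minimizer (flat-prior case):
# theta0 = (H' Si H)^{-1} H' Si H_star psi
theta0 = np.linalg.solve(H.T @ Si @ H, H.T @ Si @ H_star @ psi)

# Sanity check: theta0 beats random perturbations
for _ in range(100):
    delta = 0.1 * rng.standard_normal(2)
    assert expected_nll(theta0) <= expected_nll(theta0 + delta) + 1e-12
print("pseudotrue theta0 =", theta0)
```

Because the expected negative log-likelihood is quadratic in $\theta$ with positive-definite Hessian $H^\top\Sigma^{-1}H$, the normal-equation solution is the global minimizer.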

2. Derivation of the MBCRB and Core Assumptions

The MBCRB lower-bounds the covariance of any misspecified-unbiased (MS-unbiased) estimator $\hat{\theta}(x)$, i.e., one satisfying

$$\mathbb{E}_{x|\psi}\big[\hat{\theta}(x)\big] = \theta_0(\psi) \quad \forall\, \psi.$$

Under likelihood regularity and suitable prior boundary conditions (e.g., the prior vanishes at the boundary of $\Theta$, so that integration by parts is well defined), the main tool is the Bayesian Fisher information matrix (BFIM) of $\theta$:

$$J = \mathbb{E}_{x, \theta}\big[\nabla_\theta \ln f(x,\theta)\, \nabla_\theta \ln f(x,\theta)^\top\big] = J_D + J_P,$$

with $J_D$ the contribution from the likelihood and $J_P$ the contribution from the prior.
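To make the decomposition concrete, the following sketch (assumed, illustrative numbers throughout) compares the closed-form BFIM of a linear–Gaussian model with a single observation against a Monte-Carlo estimate of the score outer product under the assumed joint model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed linear-Gaussian model (illustrative values):
#   theta ~ N(mu_theta, Sigma_theta),  x | theta ~ N(H theta, Sigma)
H = np.array([[1.0, 0.5], [0.0, 1.0]])
Sigma = np.array([[0.4, 0.0], [0.0, 0.9]])
mu_theta = np.zeros(2)
Sigma_theta = np.array([[2.0, 0.3], [0.3, 1.0]])

Si, Sti = np.linalg.inv(Sigma), np.linalg.inv(Sigma_theta)

# Closed-form BFIM: likelihood part J_D plus prior part J_P (one observation)
J_D = H.T @ Si @ H
J_P = Sti
J = J_D + J_P

# Monte-Carlo check: J = E[ grad_theta ln f(x,theta) grad_theta ln f(x,theta)^T ]
n = 200_000
theta = rng.multivariate_normal(mu_theta, Sigma_theta, size=n)
x = theta @ H.T + rng.multivariate_normal(np.zeros(2), Sigma, size=n)
# Rows are per-sample joint scores: H' Si (x - H theta) - Sti (theta - mu_theta)
score = (x - theta @ H.T) @ Si @ H - (theta - mu_theta) @ Sti
J_mc = score.T @ score / n
assert np.allclose(J_mc, J, rtol=0.1)
```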

The derivation leverages covariance–score identities and the Cauchy–Schwarz inequality to establish, for any $a \in \mathbb{R}^{n_\theta}$ and $b \in \mathbb{R}^{n_\psi}$,

$$a^\top\, \mathrm{Cov}_{x,\psi}\big[\hat{\theta}(x)-\theta_0(\psi)\big]\, a \;\cdot\; b^\top J b \;\geq\; \big[a^\top\, \mathbb{E}_\psi[\partial \theta_0(\psi) / \partial \psi]\, b\big]^2.$$

Optimal choice of $b$ yields the matrix MBCRB inequality:

$$\mathrm{Cov}_{x,\psi}\big[\hat{\theta}(x)-\theta_0(\psi)\big] \succeq A J^{-1} A^\top,$$

where $A = \mathbb{E}_\psi\big[\partial \theta_0(\psi) / \partial \psi\big]$ is the prior-averaged Jacobian of the pseudotrue mapping. In expanded form,

$$\mathbb{E}_{x,\psi}\Big[\big[\hat{\theta}(x) - \theta_0(\psi)\big]\big[\hat{\theta}(x) - \theta_0(\psi)\big]^\top\Big] - A J^{-1} A^\top \succeq 0.$$

When the model is correctly specified ($\theta_0(\psi) = \psi$, $A = I$), the MBCRB reduces to the standard BCRB: $\mathrm{Cov}[\hat{\theta}-\psi] \succeq J^{-1}$.

3. Closed-Form MBCRB in Linear–Gaussian Models

In linear–Gaussian systems, closed-form MBCRB expressions can be derived, illustrating the concrete implementation. The true model specifies $\psi \in \mathbb{R}^d$ with prior $\psi \sim \mathcal{N}(\mu_\psi, \Sigma_\psi)$ and $N$ i.i.d. observations $x_n|\psi \sim \mathcal{N}(H_* \psi, \Sigma_*)$, $n = 1, \dots, N$. The assumed model posits $\theta \in \mathbb{R}^{n_\theta}$ with prior $\theta \sim \mathcal{N}(\mu_\theta, \Sigma_\theta)$ and $x_n|\theta \sim \mathcal{N}(H\theta, \Sigma)$.

For this setting:

  • The pseudotrue parameter is given by

$$\theta_0(\psi) = \big[N H^\top \Sigma^{-1} H + \Sigma_\theta^{-1}\big]^{-1} \big[N H^\top \Sigma^{-1} H_* \psi + \Sigma_\theta^{-1} \mu_\theta\big].$$

  • The constant Jacobian is

$$A = \big[N H^\top \Sigma^{-1} H + \Sigma_\theta^{-1}\big]^{-1} N H^\top \Sigma^{-1} H_*.$$

  • The BFIM is

$$J = N H^\top \Sigma^{-1} H + \Sigma_\theta^{-1}.$$

  • The MBCRB adopts the sandwich form

$$\mathrm{Cov}_{x,\psi}\big[\hat{\theta} - \theta_0\big] \succeq A J^{-1} A^\top.$$

  • For bounding MSE relative to the true parameter, a bias term is added:

$$\mathrm{Cov}\big[\hat{\theta} - \psi\big] \succeq A J^{-1} A^\top + \mathbb{E}_\psi[r(\psi)]\, \mathbb{E}_\psi[r(\psi)]^\top,$$

with bias $r(\psi) = \theta_0(\psi) - \psi$.
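These closed-form expressions transcribe directly into code. A minimal sketch (Python/NumPy, with made-up values for $H_*$, $H$, $\Sigma_*$, $\Sigma$, and the priors):

```python
import numpy as np

# Illustrative linear-Gaussian setup (all numbers assumed for the sketch)
N = 25
H_star = np.array([[1.0, 0.3], [0.2, 1.0], [0.5, 0.5]])
H      = np.array([[1.0, 0.2], [0.1, 1.0], [0.6, 0.4]])  # mismatched H
Sigma_star = 0.4 * np.eye(3)
Sigma      = 0.6 * np.eye(3)                             # mismatched noise
mu_theta, Sigma_theta = np.zeros(2), np.eye(2)
mu_psi = np.array([1.0, -1.0])

Si  = np.linalg.inv(Sigma)
Sti = np.linalg.inv(Sigma_theta)

J = N * H.T @ Si @ H + Sti                  # BFIM
Jinv = np.linalg.inv(J)
A = Jinv @ (N * H.T @ Si @ H_star)          # constant Jacobian of theta0

def theta0(psi):
    """Pseudotrue parameter for a given true psi."""
    return Jinv @ (N * H.T @ Si @ H_star @ psi + Sti @ mu_theta)

mbcrb = A @ Jinv @ A.T                      # bound on Cov[theta_hat - theta0(psi)]
# theta0 is affine in psi, so E_psi[r(psi)] = theta0(mu_psi) - mu_psi
r_bar = theta0(mu_psi) - mu_psi
mse_bound = mbcrb + np.outer(r_bar, r_bar)  # bound relative to the true parameter
```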

The maximum a posteriori (MAP) estimator $\hat{\theta}_{\text{MAP}}(x)$ under the assumed model is MS-unbiased and asymptotically attains the MBCRB.
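In this linear–Gaussian setting the MAP estimator is linear in the data, so MS-unbiasedness and the bound can be checked by simulation. A scalar toy example (all numbers assumed; the mismatch is confined to the measurement gain and kept mild so the bound is informative at finite $N$):

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar toy (assumed values): mild mismatch in the gain only
N, h_star, h = 10, 0.9, 1.0
sig2_star = sig2 = 0.5          # same noise variance in both models
mu_psi, s2_psi = 0.5, 1.0       # true prior on psi
mu_th, s2_th = 0.0, 1.0         # assumed prior on theta

J = N * h**2 / sig2 + 1.0 / s2_th
A = (N * h * h_star / sig2) / J
bound = A**2 / J                # MBCRB on Var[theta_hat - theta0(psi)]
theta0 = lambda psi: (N * h * h_star / sig2 * psi + mu_th / s2_th) / J

trials = 100_000
psi = mu_psi + np.sqrt(s2_psi) * rng.standard_normal(trials)
# Sufficient statistic: sum of the N observations, ~ N(N h_star psi, N sig2_star)
x_sum = N * h_star * psi + np.sqrt(N * sig2_star) * rng.standard_normal(trials)
theta_map = (h / sig2 * x_sum + mu_th / s2_th) / J   # MAP under assumed model
err = theta_map - theta0(psi)

assert abs(err.mean()) < 0.01   # MS-unbiased: E[theta_map | psi] = theta0(psi)
assert err.var() >= bound       # empirical variance dominates the MBCRB
```

Because the MAP estimator here is exactly linear, its MS-unbiasedness holds at any sample size; the gap between `err.var()` and `bound` shrinks as the mismatch vanishes.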

4. Asymptotic Tightness, Special Cases, and Role of Mismatch

The MBCRB is asymptotically achieved by estimators such as the misspecified MAP (or QMLE+prior) under standard regularity conditions. Its tightness often persists in finite sample regimes where the standard BCRB can fail to provide meaningful bounds under misspecification.

In the special case of correct specification, where priors, measurement matrices, and noise covariances align ($\mu_\theta = \mu_\psi$, $\Sigma_\theta = \Sigma_\psi$, $H = H_*$, $\Sigma = \Sigma_*$), the pseudotrue mapping satisfies $\theta_0(\psi) = \psi$, the Jacobian is $A = I$, and the MBCRB collapses to the familiar $J^{-1}$.

Model mismatch, i.e., differences between $H$ and $H_*$ and/or between $\Sigma$ and $\Sigma_*$, is explicitly captured in both the matrix $A$ and the bias term $r(\psi)$. Severe mismatch leads to a growing divergence between $\theta_0(\psi)$ and $\psi$, inflating both the MBCRB covariance bound and the bias penalty above the correctly specified baseline (Tang et al., 2023).
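This inflation can be quantified with the closed-form bound itself. A scalar sketch (assumed numbers) sweeps the assumed gain $h$ away from the true gain $h_* = 1$ and evaluates the total MSE bound, sandwich term plus squared mean bias:

```python
import numpy as np

# Scalar toy (illustrative values): how the MSE bound grows with mismatch in H
N, sig2, s2_th, mu_th = 10, 0.5, 1.0, 0.0
h_star, mu_psi = 1.0, 1.0

def mse_bound(h):
    """Sandwich term A^2/J plus squared mean bias E[r(psi)]^2 for assumed gain h."""
    J = N * h**2 / sig2 + 1.0 / s2_th
    A = (N * h * h_star / sig2) / J
    # theta0 is affine in psi, so E[theta0(psi)] = theta0(mu_psi)
    r_bar = (N * h * h_star / sig2 * mu_psi + mu_th / s2_th) / J - mu_psi
    return A**2 / J + r_bar**2

bounds = {h: mse_bound(h) for h in (1.0, 0.8, 0.6)}
# The bound inflates monotonically as h drifts from h_star
assert bounds[1.0] < bounds[0.8] < bounds[0.6]
```

Note that even at $h = h_*$ a small residual bias remains in this toy, because the assumed prior shrinks $\theta_0(\psi)$ toward $\mu_\theta$; the mismatch-driven part of the penalty is the growth on top of that baseline.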

5. Practical Implications for Bayesian Estimation and Model Selection

MBCRB addresses the need for rigorous performance benchmarks in real-world estimation tasks where model parameters are typically only approximately known or chosen for tractability. By providing a closed-form, attainable lower bound on estimator covariance (including prior effects and misspecification consequences), MBCRB serves as a quantitative design and validation tool. Model selection methodologies are thereby supported: competing models may be compared directly by evaluating their MBCRBs, facilitating risk assessment and optimization under practical constraints.

The MBCRB thus extends standard Bayesian inference theory, enabling the evaluation of estimation accuracy even in the presence of persistent model misspecification by means of analytically tractable and tight lower bounds.

6. Summary and Outlook

The Misspecified Bayesian Cramér–Rao Bound generalizes the BCRB framework to accommodate parametric model mismatch. It introduces the KL-optimal pseudotrue parameter mapping, utilizes a modified information matrix, and yields a transparent matrix inequality whose form reflects the extent of model mismatch and prior selection. For linear–Gaussian systems, fully closed-form formulas are available, supporting both theoretical analysis and practical estimator design in Bayesian signal processing and statistical inference (Tang et al., 2023). A plausible implication is that the MBCRB provides foundational guidance for estimator selection and model engineering whenever precise modeling of the data-generating process is infeasible.
