
Self-Normalized Pseudo-Posterior

Updated 10 February 2026
  • Self-normalized pseudo-posterior is a discrete probability measure derived from SNIS that enables inference in settings with intractable normalizing constants.
  • It generalizes the SNIS estimator by coupling proposal distributions to reduce variance and bias in expectation estimation.
  • Practical algorithms using self-normalized pseudo-posteriors enhance computational efficiency in Bayesian prediction and models with doubly intractable normalizing constants.

A self-normalized pseudo-posterior refers to a discrete, data-dependent probability measure arising naturally from self-normalized importance sampling (SNIS) and closely related procedures, particularly in settings where direct use of the (normalized) posterior is impeded by intractable normalizing constants. Through the mechanism of weight normalization, the SNIS procedure induces a random atomic distribution—termed the pseudo-posterior—supported on the sampled proposals. This construction provides a foundation for statistical inference (e.g., quantiles, credible regions) directly from importance samples, and underlies generalizations involving couplings, bias-reduction, and scalable approximations in complex models.

1. Foundations: SNIS and the Pseudo-Posterior

Given a target expectation $\mathbb{E}_\pi[f(x)] = \int f(x)\,\pi(x)\,dx$ with $\pi(x)$ known only up to a normalizing constant, SNIS approximates the expectation using independent proposal draws $x_1, \dots, x_N \sim q$ via

$$\widehat I = \frac{\sum_{i=1}^N w_i f(x_i)}{\sum_{i=1}^N w_i}, \qquad w_i = \frac{\tilde\pi(x_i)}{q(x_i)},$$

where $\tilde\pi(x) \propto \pi(x)$ is the unnormalized target. The normalized weights $\bar w_i = w_i / \sum_j w_j$ define a discrete distribution

$$\hat\pi_{\mathrm{SNIS}}(x) = \sum_{i=1}^N \bar w_i \, \delta_{x_i}(x)$$

on the set $\{x_1, \dots, x_N\}$, called the self-normalized pseudo-posterior (Cardoso et al., 2022). This measure provides a proxy for the target posterior and enables inference on arbitrary functionals as empirical averages under $\hat\pi_{\mathrm{SNIS}}$ (Branchini et al., 2024).
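As a concrete sketch (a toy one-dimensional setting with a standard-normal target kernel and a wider Gaussian proposal; all names and constants below are illustrative assumptions, not taken from the cited papers), the pseudo-posterior weights and functionals such as means and quantiles can be computed directly:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000

# Unnormalized target pi~(x) (standard normal kernel, normalizer "unknown")
# and a wider Gaussian proposal q -- both choices are illustrative.
tilde_pi = lambda x: np.exp(-0.5 * x**2)
sigma_q = 2.0
q = lambda x: np.exp(-0.5 * (x / sigma_q) ** 2) / (sigma_q * np.sqrt(2 * np.pi))

x = rng.normal(scale=sigma_q, size=N)   # proposal draws x_i ~ q
w = tilde_pi(x) / q(x)                  # unnormalized weights w_i
w_bar = w / w.sum()                     # normalized weights: atoms of the pseudo-posterior

# Expectation of f(x) = x^2 as an empirical average under the pseudo-posterior
# (true value under the normalized target is 1).
f_hat = np.sum(w_bar * x**2)

# A quantile of the pseudo-posterior: sort the atoms, accumulate weight mass.
order = np.argsort(x)
cdf = np.cumsum(w_bar[order])
median_hat = x[order][np.searchsorted(cdf, 0.5)]   # true median is 0
```

The same weighted atoms support any downstream functional, which is what makes the pseudo-posterior a reusable object rather than a single-purpose estimator.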

2. Ratio-of-Integrals Perspective and Generalization

The SNIS estimator can be interpreted as an empirical approximation to a ratio of intractable integrals,

$$\mu = \frac{Z_1}{Z_2} = \frac{\int \ell(x)\,\rho(x)\,dx}{\int \rho(x)\,dx},$$

where, in Bayesian applications, $\rho$ is the unnormalized posterior and $\ell$ is the integrand of interest (e.g., $f$). Standard SNIS uses the same set of proposal draws for both numerator and denominator.

Recent methodological advances generalize this by introducing joint sampling schemes $Q(x, y)$ on an extended proposal space, where the marginals $q_1(x)$ and $q_2(y)$ are separately adapted for estimating $Z_1$ and $Z_2$, respectively. The self-normalized pseudo-posterior is then implicitly indexed by both marginals and the coupling structure between them (Branchini et al., 2024). This two-marginal, coupled approach enables variance reduction unattainable by classical SNIS.

3. Couplings and Adaptive Two-Stage Schemes

A key innovation is constructing the joint proposal $Q(x, y)$ via couplings (joint distributions with prescribed marginals on the unit hypercube). Transport maps $T_1$, $T_2$ push uniform samples through to the desired marginals, and the coupling $C(u, v)$ on $[0, 1]^{2d}$ encodes the dependency structure. Parameterizing and learning this coupling gives adaptive control over the correlation between the numerator and denominator estimates of the self-normalized estimator.

The typical workflow consists of two stages:

  • Marginal adaptation: Learn $q_1$, $q_2$ using AIS or VI to approximate the optimal proposals.
  • Coupling adaptation: Fix marginals and optimize the coupling (e.g., via copula families or antithetic constructions) to minimize the estimator's variance. This can be accomplished using stochastic gradient procedures on suitable objective functionals of the weights (Branchini et al., 2024).
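A minimal sketch of the coupling stage, using a fixed antithetic coupling $v = 1 - u$ rather than a learned one (the shared logistic transport map and Gaussian target kernel here are assumptions for illustration, not the construction of Branchini et al., 2024):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Unnormalized target rho (standard normal kernel) and integrand ell;
# the true ratio is mu = E[x^2] = 1 under the normalized target.
rho = lambda x: np.exp(-0.5 * x**2)
ell = lambda x: x**2

# Shared transport map T: inverse CDF of the logistic distribution,
# used for both marginals (q1 = q2 = q) to keep the sketch simple.
T = lambda u: np.log(u / (1.0 - u))
def q(x):
    # Logistic density, written with |x| for numerical stability (symmetric).
    e = np.exp(-np.abs(x))
    return e / (1.0 + e) ** 2

# Antithetic coupling C(u, v): v = 1 - u, which makes the numerator and
# denominator sample sets negatively dependent.
u = rng.uniform(1e-12, 1.0 - 1e-12, size=N)
x = T(u)          # samples for the numerator integral  Z1
y = T(1.0 - u)    # samples for the denominator integral Z2

num = np.mean(ell(x) * rho(x) / q(x))   # estimates Z1 (up to a shared constant)
den = np.mean(rho(y) / q(y))            # estimates Z2 (same constant cancels)
mu_hat = num / den                      # coupled self-normalized ratio estimate
```

Swapping the antithetic map for a parameterized copula and minimizing an empirical variance objective by stochastic gradient is the adaptive version described above.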

4. Statistical Properties: Bias, Variance, and Consistency

The self-normalized pseudo-posterior induces a biased, but consistent, estimator of $\mathbb{E}_\pi[f]$. For proposals $q$ with finite weight moments, the bias and variance are controlled as

$$|\mathbb{E}[\hat\pi_N[f]] - \pi(f)| \le 12\,N^{-1}\kappa_{q,\pi}, \qquad \mathrm{Var}(\hat\pi_N[f]) \le 4\,N^{-1}\kappa_{q,\pi},$$

where $\kappa_{q,\pi} = \mathbb{E}_q[w^2] / (\mathbb{E}_q[w])^2$ (Cardoso et al., 2022). As $N \to \infty$, the estimator is asymptotically unbiased and normal, but for fixed $N$ the so-called "variance floor" characteristic of SNIS remains.
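The quantity $\kappa_{q,\pi}$ is estimable from the weights themselves, and the familiar effective sample size $N/\kappa$ is the practical readout of the $N^{-1}\kappa_{q,\pi}$ factor in both bounds. A sketch with an illustrative Gaussian pair (for a standard-normal target and a $\sigma = 3$ Gaussian proposal, $\kappa = 9/\sqrt{17} \approx 2.18$ in closed form):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10_000

# Illustrative pair: standard-normal target kernel, sigma = 3 Gaussian proposal.
tilde_pi = lambda x: np.exp(-0.5 * x**2)
sigma_q = 3.0
q = lambda x: np.exp(-0.5 * (x / sigma_q) ** 2) / (sigma_q * np.sqrt(2 * np.pi))

x = rng.normal(scale=sigma_q, size=N)
w = tilde_pi(x) / q(x)

# Plug-in estimate of kappa_{q,pi} = E_q[w^2] / (E_q[w])^2 (>= 1 by Cauchy-Schwarz);
# it is invariant to the unknown normalizing constant of tilde_pi.
kappa_hat = np.mean(w**2) / np.mean(w) ** 2

# Effective sample size: roughly how many i.i.d. target draws the N weighted
# draws are worth.
ess = N / kappa_hat
```

When the proposal degrades, $\kappa$ blows up and both bounds loosen in proportion, which is why proposal adaptation precedes coupling adaptation in the two-stage schemes above.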

In the two-marginal, coupled setting the asymptotic relative variance decomposes as

$$\mu^{-2}\,\mathrm{Var}_Q^\infty[\widehat\mu] = \chi^2(q_1^* \,\|\, q_1) + \chi^2(q_2^* \,\|\, q_2) - 2\,(\mathcal{C}(Q) - 1),$$

where $q_1^*(x) \propto \ell(x)\rho(x)$ and $q_2^*(y) \propto \rho(y)$ are the optimal marginals, and $\mathcal{C}(Q)$ captures the coupling effect. Appropriately tuned couplings can yield significant variance reduction (Branchini et al., 2024).

5. Practical Algorithms and Bias Reduction

Self-normalized pseudo-posterior procedures are central to scalable Monte Carlo inference:

  • In classical SNIS, pseudo-posterior expectations are obtained as weighted averages over the atomic measure.
  • Coupled and adaptive SNIS algorithms first optimize proposals, then couple samples to achieve minimum variance in ratio estimators (Branchini et al., 2024).
  • The BR-SNIS algorithm applies Markovian recycling (i-SIR chains) to yield a bias-reduced pseudo-posterior estimate with negligible additional variance and substantially improved finite-sample bias (Cardoso et al., 2022).

Algorithmic implementations exploit normalization-invariant updates and Markov chain recycling to balance statistical efficiency and computational cost (see pseudocode in (Cardoso et al., 2022, Branchini et al., 2024)).
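The Markovian-recycling idea can be caricatured as follows. This is a simplified i-SIR-style sketch under illustrative choices (Gaussian target kernel and proposal, uniform-weight resampling of the chain state), not the exact BR-SNIS estimator or weighting of Cardoso et al. (2022):

```python
import numpy as np

rng = np.random.default_rng(3)

tilde_pi = lambda x: np.exp(-0.5 * x**2)   # unnormalized target (illustrative)
sigma_q = 2.0
q_pdf = lambda x: np.exp(-0.5 * (x / sigma_q) ** 2) / (sigma_q * np.sqrt(2 * np.pi))

def isir_recycled_estimate(f, n_iters=500, n_props=64):
    """i-SIR chain that recycles every weighted proposal into the estimate."""
    state = 0.0
    num = den = 0.0
    for _ in range(n_iters):
        # Candidate pool: retained chain state plus fresh proposal draws.
        cands = np.concatenate(([state], rng.normal(scale=sigma_q, size=n_props)))
        w = tilde_pi(cands) / q_pdf(cands)
        # Recycle the whole weighted pool into a running SNIS-style estimate.
        num += np.sum(w * f(cands))
        den += np.sum(w)
        # Move the chain: resample the next state proportionally to the weights.
        state = cands[rng.choice(len(cands), p=w / w.sum())]
    return num / den

est = isir_recycled_estimate(lambda x: x**2)   # targets E[x^2] = 1
```

The point of the recycling is that the resampling step controls finite-sample bias while the recycled weighted pool keeps the variance close to that of plain SNIS with the same total budget.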

6. Applications and Empirical Performance

Self-normalized pseudo-posterior frameworks are widely deployed:

  • In Bayesian prediction and posterior predictive density estimation, variance-reduced coupled SNIS yields mean-squared error reductions by 2–3 orders of magnitude relative to classical SNIS and two-proposal independent methods, especially in high dimension or under model misspecification (Branchini et al., 2024).
  • In neural language modeling, self-normalized pseudo-posteriors enable $O(K)$-cost training (versus $O(C)$ for softmax normalization), with minimal impact on perplexity or word error rate (Yang et al., 2021). The empirical pseudo-posterior drives the cross-entropy loss and eliminates the need for additional bias corrections.
  • In models with doubly intractable normalizing constants, such as ERGMs, pseudo-posteriors constructed from tractable pseudolikelihoods may be further calibrated to match the target's mode and curvature, providing samples with accurate marginal inference and computational costs orders of magnitude below exchange algorithms (Bouranis et al., 2015).
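The cost asymmetry in the language-modeling bullet can be made concrete with a generic sampled-softmax sketch (numpy, illustrative sizes and a random embedding table; this is not the specific training scheme of Yang et al., 2021): the self-normalized loss touches only $K + 1$ output rows instead of all $C$.

```python
import numpy as np

rng = np.random.default_rng(4)
C, K, d = 50_000, 256, 32   # vocabulary size, sampled classes, hidden size

W = rng.normal(scale=0.1, size=(C, d))   # output embedding table (illustrative)
h = rng.normal(size=d)                    # context vector from the network
target = 123                              # index of the true next word

# Full softmax cross-entropy: needs all C logits -- O(C) per step.
logits = W @ h
m = logits.max()
full_loss = np.log(np.sum(np.exp(logits - m))) + m - logits[target]

# Self-normalized sampled variant: normalize over the target plus K sampled
# negatives only -- O(K) per step.
idx = np.concatenate(([target], rng.choice(C, size=K, replace=False)))
sub = W[idx] @ h
ms = sub.max()
sampled_loss = np.log(np.sum(np.exp(sub - ms))) + ms - sub[0]
```

The sampled normalizer is exactly a self-normalized pseudo-posterior over the sampled class set, which is why the per-step cost scales with $K$ rather than the vocabulary size.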

7. Significance and Outlook

The self-normalized pseudo-posterior undergirds a general paradigm for inference under intractability: transforming approximations via normalization, adapting proposal and coupling structure, and supporting empirical measures for downstream inference. This approach offers consistency, modularity for integration with adaptive/variational methods, and tractable solutions for both Monte Carlo and large-scale variational objectives. Empirical studies demonstrate substantial gains in statistical and computational efficiency, particularly when conventional normalization or marginalization is prohibitive (Branchini et al., 2024, Cardoso et al., 2022, Yang et al., 2021, Bouranis et al., 2015). Future work may further integrate these constructions with normalizing flows, energy-based models, and nonparametric surrogates for broader classes of simulation-based inference and doubly intractable models.
