
Importance Weighted VI

Updated 8 February 2026
  • Importance Weighted Variational Inference is a method that tightens the ELBO using multiple importance samples to yield a more accurate estimation of the marginal likelihood.
  • It employs techniques like sticking-the-landing (STL) and doubly-reparameterized gradients (DReG) to counteract gradient variance and the signal-to-noise ratio collapse.
  • Extensions such as hierarchical proposals, structured variational families, and deep ensembles broaden its applications in high-dimensional latent variable models.

Importance Weighted Variational Inference (IWVI) refers to a class of variational inference methods in which the traditional evidence lower bound (ELBO) is tightened using importance sampling with multiple samples. This approach, which originated with the Importance-Weighted Autoencoder (IWAE) framework, increases the accuracy of marginal likelihood estimation for latent variable models and has motivated a rich line of research extending both theoretical analysis and practical algorithms.

1. Foundations of Importance-Weighted Variational Inference

Given a latent-variable model $p_\theta(x,z) = p_\theta(z)\, p_\theta(x|z)$ and a variational posterior $q_\phi(z|x)$, the classical ELBO is

$$\mathcal{L}(x) = \mathbb{E}_{z \sim q_\phi(z|x)} \left[ \log \frac{p_\theta(x, z)}{q_\phi(z|x)} \right].$$

This is a lower bound on the log marginal likelihood $\log p_\theta(x)$ by Jensen's inequality. Burda et al. (2015) replaced the single-sample expectation with a $K$-sample importance sampling estimate:

$$\mathrm{IWELBO}_K(x) = \mathbb{E}_{z_{1:K} \sim q_\phi} \left[ \log \frac{1}{K} \sum_{k=1}^K \frac{p_\theta(x, z_k)}{q_\phi(z_k|x)} \right].$$

The bounds $\mathrm{IWELBO}_K(x)$ increase monotonically with $K$ and approach $\log p_\theta(x)$ as $K \to \infty$ (Finke et al., 2019, Domke et al., 2018). The tightening occurs because the logarithm is applied to an unbiased estimator of $p_\theta(x)$ whose variance shrinks as more samples are averaged, which reduces the Jensen gap.
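The tightening is easy to check numerically. Below is a minimal NumPy sketch on a toy conjugate model ($z \sim N(0,1)$, $x|z \sim N(z,1)$, so $\log p_\theta(x)$ is available in closed form) with a deliberately mismatched Gaussian proposal; the model and proposal choices here are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
x = 1.5  # a single observation

def log_normal(v, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mean) ** 2 / var)

# Model: z ~ N(0,1), x|z ~ N(z,1)  =>  marginal p(x) = N(0, 2)
log_px = log_normal(x, 0.0, 2.0)

# Deliberately mismatched proposal q(z|x) = N(x/2, 1); the true posterior is N(x/2, 1/2)
q_mean, q_var = x / 2.0, 1.0

def iwelbo(K, n_outer=50_000):
    z = q_mean + np.sqrt(q_var) * rng.standard_normal((n_outer, K))
    log_w = (log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0)
             - log_normal(z, q_mean, q_var))
    m = log_w.max(axis=1, keepdims=True)  # log-sum-exp for numerical stability
    return np.mean(m[:, 0] + np.log(np.exp(log_w - m).sum(axis=1) / K))

for K in (1, 10, 100):
    print(K, iwelbo(K), "vs log p(x) =", log_px)
```

Increasing $K$ moves the Monte Carlo estimate of the bound monotonically toward the exact $\log p_\theta(x)$.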

Table: Hierarchy of Variational Bounds

| Bound | Equation | Recoverable Special Cases |
|---|---|---|
| ELBO | $\mathbb{E}_{q_\phi}[\log w(z)]$ | $K=1$ in IWELBO |
| IWELBO (IWAE) | $\mathbb{E}_{q_\phi^{K}}\left[\log \frac{1}{K}\sum_{k=1}^K w(z_k)\right]$ | $K>1$ |
| VR-IWAE | $\frac{1}{1-\alpha}\mathbb{E}_{q_\phi^K}\left[\log \frac{1}{K}\sum_{k=1}^K w(z_k)^{1-\alpha}\right]$ | $\alpha=0$ is IWAE; $K=1$ is VR (Daudel et al., 2022) |

2. Algorithmic Implementations and Gradient Estimation

Optimization of IWVI objectives requires gradients with respect to the variational parameters $\phi$. For reparameterizable models, the standard estimator is

$$\nabla_\phi\, \mathrm{IWELBO}_K = \mathbb{E}_{\varepsilon_{1:K}} \left[ \sum_{k=1}^K \tilde w_k\, \nabla_\phi \log w\big(f(\varepsilon_k; \phi, x)\big) \right],$$

where $\tilde w_k$ are the normalized importance weights. However, Rainforth et al. (2018) demonstrated that the signal-to-noise ratio (SNR) of this gradient estimator vanishes as $K$ increases, causing inference network gradients to become ineffective (Finke et al., 2019, Jiang et al., 4 Feb 2026).
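The normalized weights $\tilde w_k = w_k / \sum_j w_j$ appearing in this estimator are simply a softmax over the log-weights, and computing them that way avoids overflow. A minimal sketch (function name is ours):

```python
import numpy as np

def normalized_weights(log_w):
    """Self-normalized importance weights w~_k = w_k / sum_j w_j,
    computed as a softmax over log-weights for numerical stability."""
    log_w = np.asarray(log_w, dtype=float)
    w = np.exp(log_w - log_w.max())  # the shift cancels in the ratio
    return w / w.sum()

# Even with extreme log-weights the softmax form stays finite:
w = normalized_weights([-1000.0, -1001.0, -999.0])
print(w)  # mass concentrates on the largest log-weight
```

Exponentiating the raw log-weights directly would underflow to zero here; the shifted form does not.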

Two prominent remedies are:

  • Sticking-the-landing (STL): drops the high-variance score-function term, yielding a biased but low-variance estimator (Finke et al., 2019).
  • Doubly-reparameterized gradients (DReG): use an identity to construct an unbiased, low-variance estimator, $\sum_{k=1}^K \tilde w_k^{2}\, \nabla_\phi \log w_k$, which preserves unbiasedness while mitigating SNR collapse (Finke et al., 2019).

When reparameterization is not available (e.g., for discrete latents), REINFORCE-type (score-function) estimators are used; however, these also suffer from SNR decay at rate $O(1/\sqrt{N})$ as the sample size $N$ increases (Daudel et al., 1 Feb 2026). The VIMCO family of estimators introduced leave-one-out baselines to reduce variance, but recent analysis (e.g., VIMCO-$\star$) demonstrated that with optimal baselining, SNR can instead scale as $\sqrt{N}$ (Daudel et al., 1 Feb 2026).
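To make the leave-one-out idea concrete, the sketch below computes per-sample VIMCO learning signals using the geometric-mean stand-in baseline of the original VIMCO estimator (Mnih and Rezende, 2016); the optimal baseline of VIMCO-$\star$ differs and is not reproduced here.

```python
import numpy as np

def logmeanexp(v):
    """Numerically stable log(mean(exp(v)))."""
    v = np.asarray(v, dtype=float)
    m = v.max()
    return m + np.log(np.mean(np.exp(v - m)))

def vimco_learning_signals(log_w):
    """Per-sample VIMCO learning signals L - L_{-k}, where L_{-k}
    replaces log w_k by the mean of the other log-weights
    (i.e. the log of their geometric mean)."""
    log_w = np.asarray(log_w, dtype=float)
    K = log_w.size
    L = logmeanexp(log_w)
    signals = np.empty(K)
    for k in range(K):
        surrogate = log_w.copy()
        surrogate[k] = np.delete(log_w, k).mean()  # leave-one-out baseline
        signals[k] = L - logmeanexp(surrogate)
    return signals

print(vimco_learning_signals([0.0, -1.0, -2.0]))
```

When all log-weights are equal, every learning signal is exactly zero: the baseline removes the shared component of the reward.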

Table: SNR Scaling Regimes

| Gradient Estimator | SNR scaling with sample count ($K$ or $N$) | Key References |
|---|---|---|
| Naive reparam | $O(1/\sqrt{K})$ (vanishing) | Finke et al., 2019; Jiang et al., 4 Feb 2026 |
| STL/DReG | $O(1)$ or $O(\sqrt{K})$ under assumptions | Finke et al., 2019 |
| VIMCO | $O(1/\sqrt{N})$ | Daudel et al., 1 Feb 2026 |
| VIMCO-$\star$ | $O(\sqrt{N})$ | Daudel et al., 1 Feb 2026 |

In the Bures-Wasserstein geometry, the Wasserstein natural gradient for the IW-ELBO attains SNR scaling of $\Omega(\sqrt{K})$, outperforming the Euclidean parameterization for large $K$ (Jiang et al., 4 Feb 2026).

3. Extensions: Hierarchical, Structured, and Ensemble Variants

IWVI supports multiple extensions to hierarchical and structured models:

  • Hierarchical proposals: rather than $K$ i.i.d. proposals, H-IWAE uses a "meta-sample" $z_0$ to induce negative correlation among conditionally independent proposals, reducing the estimator's variance below the $1/K$ i.i.d. scaling (Huang et al., 2019).
  • Conditionally structured Gaussian approximations: Partitioning variables into global and local blocks, conditionally structured variational approximations exploit conditional independence—enabling scalable IWVI for, e.g., GLMMs and state space models (Tan et al., 2019).
  • Locally enhanced bounds: Importance weighting applied to blocks of latent variables separately (e.g., per data group) allows unbiased minibatch gradients and lower variance in hierarchical models (Geffner et al., 2022).
  • Multiple importance sampling ELBO (MISELBO): Uses deep ensembles of variational distributions to further tighten the bound, exploiting the Jensen-Shannon divergence between proposals for additional gain (Kviman et al., 2022).
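The MISELBO construction from the last bullet can be sketched in a few lines: each ensemble member's samples are weighted against the mixture density of all proposals. The toy Gaussian model and the two-component "ensemble" below are illustrative assumptions, not the setup of Kviman et al.

```python
import numpy as np

rng = np.random.default_rng(1)
x = 1.5

def log_normal(v, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mean) ** 2 / var)

def log_joint(z):
    # Prior z ~ N(0,1), likelihood x|z ~ N(z,1); hence p(x) = N(0, 2)
    return log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0)

proposals = [(0.5, 1.0), (1.0, 1.0)]  # illustrative (mean, var) pairs

def log_mixture(z):
    comps = np.stack([log_normal(z, m, v) for m, v in proposals])
    mx = comps.max(axis=0)
    return mx + np.log(np.mean(np.exp(comps - mx), axis=0))

def miselbo(K, n_outer=50_000):
    per_proposal = []
    for m, v in proposals:
        z = m + np.sqrt(v) * rng.standard_normal((n_outer, K))
        log_w = log_joint(z) - log_mixture(z)  # weights against the mixture
        mx = log_w.max(axis=1, keepdims=True)
        per_proposal.append(mx[:, 0] + np.log(np.exp(log_w - mx).sum(axis=1) / K))
    return np.mean(per_proposal)

print(miselbo(10), "vs log p(x) =", log_normal(x, 0.0, 2.0))
```

Because the effective proposal is the mixture, the estimate remains a valid lower bound on $\log p(x)$ while benefiting from the diversity of the ensemble.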

In hierarchical models, the gap between true posterior and approximate inference is often dominated by variance in local blocks; thus, local IWVI or blockwise importance weighting is preferable to global IWVI (Geffner et al., 2022).

4. Generalizations and Theoretical Guarantees

Generalized objectives: Importance weighting can be seen as a specific path in the thermodynamic variational objectives (TVO) framework associated with “geometric mean” interpolations. This viewpoint leads to Hölder-bounded VI, which improves discretization error by flattening the local-evidence curve, yielding one-step bounds that are potentially much tighter than IW-ELBO for a matched compute budget (Chen et al., 2021).

Alpha-divergence (VR-IWAE): The VR-IWAE objective generalizes the IWAE and Rényi-ELBO objectives, parameterized by $\alpha$. This family smoothly interpolates between the ELBO ($\alpha \to 1$), IWAE ($\alpha = 0$), and the Rényi bound ($K \to \infty$ at fixed $\alpha$). For $0 < \alpha < 1$, VR-IWAE achieves strictly better SNR in the encoder gradients than IWAE, at the cost of a bias governed by $\alpha$ (Daudel et al., 2022, Daudel et al., 2024). As the dimension $d$ increases, however, $K$ must scale exponentially in $d$ to avoid weight collapse, limiting practical gains in high dimensions.
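The VR-IWAE objective can be estimated from log-weights alone. The sketch below (with synthetic log-weights, purely illustrative) checks that $\alpha = 0$ recovers the IWAE bound and that larger $\alpha$ yields a smaller, more biased objective value.

```python
import numpy as np

rng = np.random.default_rng(2)

def logmeanexp(v, axis=-1):
    """Numerically stable log(mean(exp(v))) along an axis."""
    m = v.max(axis=axis, keepdims=True)
    out = m + np.log(np.mean(np.exp(v - m), axis=axis, keepdims=True))
    return np.squeeze(out, axis=axis)

def vr_iwae(log_w, alpha):
    """Monte Carlo VR-IWAE objective:
    (1/(1-alpha)) * E[ log (1/K) sum_k w_k^{1-alpha} ]."""
    return np.mean(logmeanexp((1.0 - alpha) * log_w)) / (1.0 - alpha)

# Synthetic log-weights: any (n_outer, K) array of log w(z_k) values
log_w = rng.normal(loc=-1.0, scale=0.5, size=(50_000, 10))

iwae = np.mean(logmeanexp(log_w))
print(vr_iwae(log_w, 0.0), iwae)  # alpha = 0 recovers the IWAE objective
print(vr_iwae(log_w, 0.5))       # intermediate alpha trades bias for SNR
```

The monotone decrease in $\alpha$ follows from the power-mean inequality applied to the weights inside the logarithm.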

Asymptotics: Under mild regularity conditions, maximizing the IWELBO as both the number of importance samples and the dataset size $N$ grow yields consistent, asymptotically normal estimators that attain the statistical efficiency of maximum likelihood when the number of importance samples grows faster than $N^{\delta/2}$, with $\delta$ depending on higher moments of the importance weights (Cherief-Abdellatif et al., 14 Jan 2025). In practice, $K \approx 5$–$20$ suffices for most of the gains when the importance-weight variance is moderate (Tan et al., 2019, Huang et al., 2019).

5. Variance Reduction, Robustness, and Practical Considerations

Naïve IWVI gradient estimators are subject to severe variance issues due to score-function terms and high-dimensional weight collapse. Recent developments address this through:

  • Variance-reduced U-statistics: Overlapping batch averages of the base gradient estimator (using U-statistics) provably lower variance, with efficient computational approximations available and empirical reductions in wall-clock variance and improved performance in IWAEs (Burroni et al., 2023).
  • Antithetic and hierarchical proposals: Negative correlation in hierarchical proposals (e.g., H-IWAE) can reduce variance strictly below i.i.d. levels (Huang et al., 2019).
  • Elliptical variational families: Employing heavy-tailed or elliptical $q$ can improve moment matching and robustness in both low and high dimensions (Domke et al., 2018).
  • Deep ensembles: Utilizing ensembles of variational proposals with multiple importance sampling (MISELBO) outperforms single-model IWVI at fixed computational budget, as quantified in MNIST and phylogenetic inference experiments (Kviman et al., 2022).
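The antithetic idea from the second bullet can be demonstrated in isolation. The following is a generic antithetic-coupling sketch (not H-IWAE's hierarchical coupling): for a monotone integrand, pairing each sample $z$ with its mirror $2\mu - z$ produces negatively correlated evaluations and a lower-variance estimate at the same sample budget.

```python
import numpy as np

rng = np.random.default_rng(4)

def mc_variance(f, mu=0.0, n=1000, reps=2000, antithetic=False):
    """Empirical variance of Monte Carlo estimates of E[f(z)], z ~ N(mu, 1),
    with or without antithetic pairs (z, 2*mu - z) at equal total cost n."""
    ests = np.empty(reps)
    for r in range(reps):
        z = mu + rng.standard_normal(n // 2 if antithetic else n)
        vals = np.concatenate([f(z), f(2 * mu - z)]) if antithetic else f(z)
        ests[r] = vals.mean()
    return ests.var()

f = np.exp  # monotone integrand; E[exp(z)] = exp(1/2) for z ~ N(0, 1)
v_iid = mc_variance(f)
v_anti = mc_variance(f, antithetic=True)
print("iid:", v_iid, "antithetic:", v_anti)
```

Here $\mathrm{Cov}(e^z, e^{-z}) = 1 - e < 0$, so the antithetic estimator's variance is strictly below the i.i.d. one at the same number of function evaluations.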

6. Empirical Applications and Domains

IWVI approaches have achieved state-of-the-art results in modern deep generative modeling (e.g., VAEs for image data), latent variable models, state-space models, deep Gaussian Processes, generalized linear mixed models, and combinatorial structures (e.g., Bayesian phylogenetics) (Huang et al., 2019, Salimbeni et al., 2019, Tan et al., 2019, Daudel et al., 1 Feb 2026). Extensions such as Annealed Importance Sampling Variational Inference (AIS-VI) further bridge the gap between VI and MCMC, providing even tighter bounds and better density estimation than IWAE for a given compute budget (Ding et al., 2019).

7. Limitations and Open Directions

Despite its tightness, IWVI suffers from fundamental high-dimensional collapse: as dimensionality increases, unless $K$ grows exponentially, the normalized importance weights concentrate on a single sample, and tighter bounds yield no improvement in SNR or parameter estimation. This limits the effectiveness of large $K$ in complex, high-dimensional models and motivates further work on variance reduction, alternative geometries (Wasserstein/Bures), and adaptive proposal design (Daudel et al., 2022, Jiang et al., 4 Feb 2026, Daudel et al., 2024).
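Weight collapse is easy to reproduce. The sketch below (toy Gaussian target and proposal of our choosing) tracks the effective sample size $\mathrm{ESS} = 1/\sum_k \tilde w_k^2$ as the dimension grows: a small per-dimension mismatch compounds multiplicatively across dimensions.

```python
import numpy as np

rng = np.random.default_rng(3)

def effective_sample_size(d, K=1000, q_scale=1.2):
    """ESS = 1 / sum_k w~_k^2 for K importance samples when the target is
    N(0, I_d) and the proposal is N(0, q_scale^2 I_d). The per-dimension
    variance mismatch compounds, so ESS/K collapses as d grows."""
    z = q_scale * rng.standard_normal((K, d))
    log_w = (-0.5 * (z ** 2).sum(axis=1)               # log target exponent
             + 0.5 * ((z / q_scale) ** 2).sum(axis=1)  # minus log proposal exponent
             + d * np.log(q_scale))                    # normalizer ratio
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

for d in (1, 10, 100):
    print(d, effective_sample_size(d))
```

With a 20% scale mismatch per dimension, nearly all of the $K = 1000$ samples carry usable weight at $d = 1$, while at $d = 100$ the weight mass concentrates on a handful of samples.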

Ongoing developments include sharper nonasymptotic analyses, efficient exploitation of conditional independence in hierarchical architectures, adaptive weighting strategies, and hybridization with MCMC (Chen et al., 2021, Ding et al., 2019). The unification of IWVI with more general divergence minimization and information-geometric optimization (e.g., Wasserstein gradients) points toward robust, high-SNR algorithms for future deep probabilistic modeling (Jiang et al., 4 Feb 2026).
