SeMIS: Sequential Multiple Importance Sampling
- SeMIS is a computational framework that combines multiple proposal distributions to efficiently estimate integrals and perform Bayesian inference.
- It utilizes mixture estimators, sequential adaptation, and convex optimization to achieve notable variance reduction and robustness in challenging sampling scenarios.
- Empirical results show significant gains, with up to 22× variance reduction and improved effective sample sizes in applications like rare event simulation and structural model updating.
Sequential Multiple Importance Sampling (SeMIS) is a class of algorithms designed to efficiently estimate integrals and perform Bayesian inference by sequentially combining samples from multiple adaptive proposal distributions. The SeMIS family achieves lower variance and greater robustness than classical importance sampling, leveraging mixture-based estimators, sequential adaptation, and, in some variants, convex optimization for control variate coefficients and mixture weights. Its main applications include rare event simulation, high-dimensional evidence estimation, and uncertainty quantification for complex multimodal or singular integrands.
1. Formal Definition and Problem Setup
SeMIS aims to compute expectations or integrals of the form

$$\mu = \mathbb{E}_{\pi}[f(X)] = \int f(x)\,\pi(x)\,\mathrm{d}x,$$

where $\pi$ is a nominal or target distribution (which may be the posterior in Bayesian settings), and $f$ is a test or payoff function that can be irregular or highly concentrated (e.g., in rare-event scenarios).
Given $J \ge 2$ proposal densities $q_1, \dots, q_J$, with $f\pi$ absolutely continuous with respect to the resulting mixture, SeMIS constructs a mixture density

$$q_\alpha(x) = \sum_{j=1}^{J} \alpha_j\, q_j(x), \qquad \alpha_j \ge 0,\quad \sum_{j=1}^{J} \alpha_j = 1.$$

Samples may be drawn IID from $q_\alpha$, or generated via stratified allocation according to the weights $\alpha_j$.
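The two sampling schemes can be sketched as follows; this is a minimal NumPy illustration with toy Gaussian components, not code from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(component_samplers, alphas, n, stratified=False):
    """Draw n points from the mixture q_alpha = sum_j alphas[j] * q_j.

    component_samplers: list of callables, each returning one draw from q_j.
    stratified=True allocates a deterministic ~n*alpha_j draws per component;
    stratified=False draws each sample's component index IID from alpha.
    """
    alphas = np.asarray(alphas, dtype=float)
    if stratified:
        counts = np.floor(n * alphas).astype(int)
        counts[np.argmax(alphas)] += n - counts.sum()  # assign rounding remainder
    else:
        counts = rng.multinomial(n, alphas)
    return np.array([component_samplers[j]()
                     for j, c in enumerate(counts) for _ in range(c)])

# Example: equal-weight mixture of two Gaussian proposals.
xs = sample_mixture([lambda: rng.normal(0.0, 1.0),
                     lambda: rng.normal(5.0, 1.0)],
                    alphas=[0.5, 0.5], n=100, stratified=True)
```

Stratified allocation returns the samples grouped by component; the mixture estimators below do not depend on sample order, so no shuffling is needed.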
In high-dimensional Bayesian evidence estimation, SeMIS builds a sequence of proposals interpolating from the prior to a softly truncated likelihood-weighted prior, e.g.,

$$q_k(\theta) \propto p(\theta)\, h_{\tau_k}\!\big(L(\theta)\big),$$

where $p$ is the prior, $L$ the likelihood, and $h_{\tau}$ a soft truncation function, with a hyperparameter sequence $\{\tau_k\}$, following the approach in (Binbin et al., 7 Jul 2025).
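The exact soft-truncation function is not reproduced here; as an assumed illustrative choice, $h_\tau(L) = \min\{1, L/\tau\}$ with $\tau_k$ tuned so that a target fraction of the current samples receives full weight can be sketched as:

```python
import numpy as np

def soft_truncation(L, tau):
    """Soft indicator min(1, L/tau): full weight for L >= tau, proportional below.
    (An assumed illustrative form; the published variant may differ.)"""
    return np.minimum(1.0, np.asarray(L, dtype=float) / tau)

def tune_tau(likelihoods, target_fraction=0.1):
    """Choose tau so that roughly `target_fraction` of the current samples
    lie above it and hence receive full weight under the truncation."""
    return np.quantile(likelihoods, 1.0 - target_fraction)
```

Raising $\tau_k$ stage by stage tightens the truncation, so the intermediate targets move from the prior toward the likelihood-weighted prior.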
2. Core Estimators and Importance Weights
SeMIS uses mixture importance sampling (MIS) estimators of the form

$$\hat{\mu}_{\mathrm{MIS}} = \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)\,\pi(x_i)}{q_\alpha(x_i)}, \qquad x_i \sim q_\alpha,$$

or, in the sequential context,

$$\hat{\mu}_T = \frac{1}{N_T} \sum_{t=1}^{T} \sum_{i=1}^{n_t} w_t(x_{t,i})\, f(x_{t,i}), \qquad N_T = \sum_{t=1}^{T} n_t,$$

where the weights $w_t$ may use either the balance heuristic, $w_t(x) = \pi(x)\big/\sum_{s=1}^{T} \tfrac{n_s}{N_T}\, q_s(x)$ (the denominator sums over all proposal densities seen so far), or discarding–reweighting, in which earlier poor proposals are assigned zero weight (Thijssen et al., 2018).
For Bayesian inference, the evidence $Z = \int L(\theta)\, p(\theta)\, \mathrm{d}\theta$ is estimated as

$$\hat{Z} = \frac{1}{N} \sum_{i=1}^{N} \frac{L(\theta_i)\, p(\theta_i)}{q_{\mathrm{mix}}(\theta_i)},$$

with all generated samples entering the estimator, weighted by their normalized contributions to the pooled mixture $q_{\mathrm{mix}}$ (Binbin et al., 7 Jul 2025).
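The pooled balance-heuristic estimator can be sketched in one dimension with hand-rolled Gaussian densities; the function names and toy components are illustrative:

```python
import numpy as np

def normal_pdf(x, mu=0.0, sd=1.0):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def balance_heuristic_estimate(stage_samples, stage_pdfs, target_pdf, f):
    """Pooled (balance-heuristic) MIS estimate of E_target[f].

    stage_samples: list of 1-D arrays; block t was drawn from proposal q_t.
    stage_pdfs:    list of callables evaluating each q_t.
    Every pooled sample x is weighted by target(x) / sum_t (n_t/N) q_t(x).
    """
    x = np.concatenate(stage_samples)
    n_t = np.array([len(s) for s in stage_samples], dtype=float)
    N = n_t.sum()
    mix = sum((n / N) * q(x) for n, q in zip(n_t, stage_pdfs))
    w = target_pdf(x) / mix
    return float(np.mean(w * f(x)))

# Example: estimate E[X^2] = 1 under N(0,1) from two mismatched proposals.
rng = np.random.default_rng(0)
blocks = [rng.normal(0.0, 2.0, size=20_000), rng.normal(1.0, 1.0, size=20_000)]
pdfs = [lambda x: normal_pdf(x, 0.0, 2.0), lambda x: normal_pdf(x, 1.0, 1.0)]
est = balance_heuristic_estimate(blocks, pdfs, normal_pdf, lambda x: x ** 2)
```

For evidence estimation, the same routine applies with `target_pdf` set to the unnormalized posterior $L(\theta)\,p(\theta)$ and $f \equiv 1$.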
3. Variance Reduction, Regret Bounds, and Control Variates
SeMIS exploits variance reduction via both optimal proposal mixture weighting and (where applicable) control variates. Using the proposal densities themselves as control variates (each integrates to one), the variance of the MIS estimator with coefficient vector $\beta$ is (He et al., 2014):

$$\sigma^2_{\alpha,\beta} = \int \frac{\big(f(x)\,\pi(x) - \sum_{j} \beta_j\, q_j(x)\big)^2}{q_\alpha(x)}\, \mathrm{d}x \;-\; \Big(\mu - \sum_{j} \beta_j\Big)^2.$$

A general regret bound holds: for any mixture $\alpha$ with $\alpha_j > 0$ and the optimal control variate $\beta^*(\alpha)$ for that mixture,

$$\sigma^2_{\alpha,\, \beta^*(\alpha)} \;\le\; \min_{j} \frac{1}{\alpha_j}\, \sigma^2_{j},$$

where $\sigma^2_j$ is the variance attained by sampling from $q_j$ alone with its own optimal control variate. This implies that the uniform mixture ($\alpha_j = 1/J$) suffers at most a factor-$J$ increase in variance compared to the best single proposal, but can be overly conservative when $J$ is large and only a few of the $q_j$ are effective.
Optimal choice of mixture probabilities and control variate coefficients is enabled by the joint convexity of the variance in $(\alpha, \beta)$, and can be practically computed by convex optimization (He et al., 2014).
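A sketch of this fit, assuming pilot samples drawn from the uniform mixture and minimizing the empirical second-moment term of the variance (the full objective also subtracts $(\mu - \sum_j \beta_j)^2$); the function names and the SLSQP solver choice are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def optimize_mixture(x_pilot, q_pdfs, f, alpha_min=0.01):
    """Fit mixture weights alpha and control-variate coefficients beta by
    minimizing an empirical second-moment estimate of the variance objective.
    x_pilot is assumed drawn from the uniform mixture of q_pdfs."""
    J = len(q_pdfs)
    Q = np.stack([q(x_pilot) for q in q_pdfs], axis=1)  # (n, J) density matrix
    qu = Q.mean(axis=1)                                 # uniform-mixture density
    fx = f(x_pilot)

    def objective(theta):
        alpha, beta = theta[:J], theta[J:]
        qa = Q @ alpha
        # Empirical int (f - beta.q)^2 / q_alpha, reweighted from the pilot mixture;
        # jointly convex in (alpha, beta) since 1/q_alpha is convex in alpha.
        return np.mean((fx - Q @ beta) ** 2 / (qa * qu))

    cons = [{"type": "eq", "fun": lambda t: t[:J].sum() - 1.0}]
    bounds = [(alpha_min, 1.0)] * J + [(None, None)] * J  # safety lower bound on alpha
    theta0 = np.concatenate([np.full(J, 1.0 / J), np.zeros(J)])
    res = minimize(objective, theta0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x[:J], res.x[J:]
```

The `alpha_min` bound implements the safety lower bound on mixture weights recommended in the practical-guidance section below.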
4. Sequential and Adaptive Algorithmic Frameworks
Distinct SeMIS variants exist, with common elements:
- Two-stage optimization and refinement: A pilot stage samples from an initial mixture (often uniform), then fits mixture weights and control variates via convex optimization; the main stage draws further samples from the optimized mixture for the final estimator (He et al., 2014).
- Sequential adaptation: The proposal at round $t$ is adapted using information from all past samples and weights, followed by drawing new samples and constructing appropriate importance weights via either the full balance heuristic or discarding of poor early proposals (Thijssen et al., 2018).
- Soft truncation for multimodal posteriors: In high-dimensional Bayesian inference, intermediate proposals interpolate between prior and posterior using softly truncated priors, maintaining tail connectivity for effective mode mixing (Binbin et al., 7 Jul 2025).
- Weight computation: Mixture-type estimators use balance heuristic weights, while discarding–reweighting reduces computational overhead by discarding earlier samples and focusing on recent proposals.
A representative pseudocode structure for SeMIS (Binbin et al., 7 Jul 2025):
- Draw samples from prior or initial proposal.
- Adaptively construct proposal sequence by tuning truncation parameters for specified acceptance probabilities.
- For each stage:
- Seed new proposals using accepted samples from the previous stage.
- Generate new samples by MCMC (e.g., elliptical slice sampling).
- Update importance weights via balance heuristic.
- Aggregate estimates using all samples and stage-specific weights.
- Optionally, resample from the pooled set for approximate posterior draws.
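The steps above can be sketched end-to-end on a toy one-dimensional problem. Everything below is illustrative: the soft-truncation form, the random-walk Metropolis stand-in for elliptical slice sampling, and the stage-ratio evidence accumulation are assumptions made for the sketch, not the published implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: prior N(0,1), likelihood N(theta; 2, 0.5).
# Analytic evidence: Z = N(2; 0, sqrt(1.25)) = 0.0720.
def prior_pdf(t):
    return np.exp(-0.5 * t ** 2) / np.sqrt(2.0 * np.pi)

def likelihood(t):
    return np.exp(-0.5 * ((t - 2.0) / 0.5) ** 2) / (0.5 * np.sqrt(2.0 * np.pi))

def h(L, tau):  # assumed soft-truncation factor
    return np.minimum(1.0, L / tau)

n, p0, n_mcmc = 2000, 0.1, 10       # samples/stage, level probability, MCMC steps
theta = rng.normal(size=n)          # stage 0: prior samples (h_0 = 1)
Zhat, tau = 1.0, None

for stage in range(50):
    L = likelihood(theta)
    tau_next = np.quantile(L, 1.0 - p0)
    final = tau_next >= 0.9 * L.max()
    if final:
        tau_next = L.max()          # target becomes proportional to prior * likelihood
    prev = np.ones(n) if tau is None else h(L, tau)
    w = h(L, tau_next) / prev       # reweight from stage k to stage k+1
    Zhat *= w.mean()                # stage ratio Z_{k+1} / Z_k
    tau = tau_next
    # Resample seeds, then move them with a stand-in random-walk Metropolis
    # kernel (the cited work uses elliptical slice sampling instead).
    theta = theta[rng.choice(n, size=n, p=w / w.sum())]
    for _ in range(n_mcmc):
        prop = theta + 0.5 * rng.normal(size=n)
        ratio = (prior_pdf(prop) * h(likelihood(prop), tau)) / (
            prior_pdf(theta) * h(likelihood(theta), tau))
        theta = np.where(rng.random(n) < ratio, prop, theta)
    if final:
        break

Z_est = Zhat * tau                  # Z = tau_K * product of stage ratios
```

Because the final truncation level equals the largest observed likelihood, the last-stage target is proportional to the posterior, and the accumulated stage ratios times $\tau_K$ recover the evidence.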
5. Computational Cost and Consistency
SeMIS algorithms differ in per-iteration cost:
- The balance heuristic requires $O(tn)$ operations at round $t$ (with $n$ new samples per round and $T$ total rounds), totaling $O(nT^2)$.
- Discarding–reweighting reduces this to $O(n)$ per round (up to the number of retained blocks), i.e., $O(nT)$ overall, by keeping only a subset of the sample blocks and updating denominators incrementally (Thijssen et al., 2018).
SeMIS estimators remain consistent under both approaches: under mild regularity conditions (e.g., dominated targets, bounded moments), the estimator converges almost surely to the desired expectation as total sample size grows. This holds with fixed or adaptive discarding schedules, provided the number of retained samples diverges (Thijssen et al., 2018).
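The incremental denominator bookkeeping behind both cost regimes can be sketched as follows; the class name and interface are illustrative, not from the cited paper:

```python
import numpy as np

class BalanceHeuristicPool:
    """Incrementally maintained balance-heuristic denominators
    S(x) = sum_s n_s q_s(x) for every stored sample x.

    Each round evaluates the new proposal once per stored sample (full balance
    heuristic). With keep_blocks set, the oldest block is discarded each round
    and its proposal's share subtracted, capping the per-round cost."""

    def __init__(self, keep_blocks=None):
        self.blocks = []                  # list of (samples, denominator sums S)
        self.q_pdfs, self.counts = [], []
        self.keep = keep_blocks

    def add_round(self, x_new, q_new):
        n = len(x_new)
        for x, S in self.blocks:          # update stored denominators in place
            S += n * q_new(x)
        self.q_pdfs.append(q_new)
        self.counts.append(n)
        S_new = sum(m * q(x_new) for m, q in zip(self.counts, self.q_pdfs))
        self.blocks.append((x_new, S_new))
        if self.keep is not None and len(self.blocks) > self.keep:
            # Discarding-reweighting: the oldest proposal gets weight zero.
            self.blocks.pop(0)
            q_old, n_old = self.q_pdfs.pop(0), self.counts.pop(0)
            for x, S in self.blocks:
                S -= n_old * q_old(x)     # remove the discarded proposal's share

    def estimate(self, target_pdf, f):
        """MIS estimate: sum over blocks of sum_i f(x) target(x) / S(x)."""
        return float(sum(np.sum(f(x) * target_pdf(x) / S) for x, S in self.blocks))
```

With `keep_blocks=None` this reproduces the full balance heuristic; a finite window gives the cheaper discarding–reweighting variant, and both remain valid MIS estimators over the retained blocks.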
6. Empirical Performance and Applications
SeMIS achieves substantial gains in variance reduction and effective sample size:
- In integrals with singularities or rare events, tailored mixture weights and control variates yield large variance reduction factors vs. plain Monte Carlo, with the optimized mixture plus control variates outperforming the optimized mixture alone, whereas the uniform mixture yields no improvement (He et al., 2014).
- In high-dimensional, multimodal Bayesian inference (e.g., Eggbox and Gaussian-shells problems, up to 20D), SeMIS yields lowest bias (<1%), lowest coefficient of variation (0.1–1.5%), and the best K–S statistics for posterior marginals compared to subset simulation (SuS) and adaptive BUS (aBUS). Effective sample size per likelihood evaluation is frequently doubled or tripled relative to comparators (Binbin et al., 7 Jul 2025).
- In practical engineering applications such as finite element model updating, SeMIS localizes structural stiffness loss and quantifies uncertainty even under incomplete measurement scenarios, revealing multimodal posteriors when data do not uniquely determine the parameters (Binbin et al., 7 Jul 2025).
- In diffusion process simulation, optimized discarding-AMIS matches the effective sample size of full balance-AMIS but with an order of magnitude less CPU time (Thijssen et al., 2018).
7. Practical Guidance and Theoretical Significance
For practitioners:
- A budget of several hundred samples per stage, a fixed moderate level probability for the adaptive truncation schedule, and balance heuristic weights are generally effective.
- Convex optimization for mixture weights and control variates should be performed with safety lower bounds on weights and, if needed, marginal relaxation to ensure feasibility.
- For Bayesian updating with high-dimensional or multimodal posteriors, softly truncated proposals and adaptive resampling via SeMIS are recommended; elliptical-slice MCMC in whitened coordinates is a robust default kernel (Binbin et al., 7 Jul 2025).
- Monitoring variance, effective sample size, and estimator stability enables adaptive stopping.
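Monitoring can be as simple as tracking the Kish effective sample size of the current importance weights (a standard diagnostic, not specific to the cited papers):

```python
import numpy as np

def effective_sample_size(weights):
    """Kish ESS: (sum w)^2 / sum(w^2), between 1 and len(weights)."""
    w = np.asarray(weights, dtype=float)
    return float(w.sum() ** 2 / np.sum(w ** 2))
```

An ESS that stagnates or collapses across stages signals proposal mismatch and can drive adaptive stopping.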
The theoretical significance of SeMIS lies in its convex-analytic foundation for variance minimization, its formal consistency guarantees under broad conditions, and its analytical regret bounds relative to unknown optimal proposals (He et al., 2014, Thijssen et al., 2018). Its sequential, adaptive structure enables scalable and robust inference in high-dimensional, multimodal, and rare-event regimes, making it a core tool for advanced Monte Carlo computation.