i-nessai: Advanced Bayesian Nested Sampling

Updated 10 November 2025
  • i-nessai is an advanced nested sampling algorithm that uses normalising flows and importance sampling to achieve efficient and unbiased Bayesian inference in complex, high-dimensional settings.
  • It employs batch updates and a meta-proposal mixture to bypass traditional rejection sampling and significantly reduce computational overhead and likelihood evaluations.
  • The algorithm supports parallel likelihood computation and integrates seamlessly with hierarchical Bayesian models, enhancing applications in gravitational-wave and pulsar timing array (PTA) data analyses.

i-nessai is an advanced nested sampling algorithm that integrates normalising flows and importance sampling to accelerate Bayesian inference, particularly in high-dimensional, multimodal problems where likelihood evaluation is computationally expensive. Originating as an evolution of the nessai sampler, i-nessai eliminates the need for rejection sampling to the prior, supports efficient batch updates, and achieves unbiased evidence estimation and posterior inference with substantial reductions in likelihood calls and runtime.

1. Mathematical Foundations of Importance Nested Sampling with Normalising Flows

Standard nested sampling constructs the Bayesian evidence integral

Z = \int L(\theta)\,\pi(\theta)\,d\theta,

by iteratively removing low-likelihood "live points" from the prior-constrained region and updating the evidence with weights based on the shrinkage of prior mass. Traditional approaches enforce strict likelihood ordering and i.i.d. draws from the prior.

i-nessai generalizes this process by drawing candidate samples from a meta-proposal distribution

Q(\theta) = \sum_{j=1}^{N} \alpha_j\,q_j(\theta),

where each q_j is a normalising flow trained to match prior-constrained regions at earlier iterations, and the α_j are meta-proposal weights proportional to batch sizes or expected evidence contributions. The sampler relaxes the i.i.d. prior constraint, allowing points to be added out of order and in batches. Each sample θ_i is assigned an importance weight,

w(\theta_i) = \frac{\pi(\theta_i)}{Q(\theta_i)},

so that the overall evidence estimate is

\hat{Z} = \frac{1}{N_{\mathrm{tot}}} \sum_{i=1}^{N_{\mathrm{tot}}} L(\theta_i)\,w(\theta_i).

Batch updates to the sample pool and mixture distribution are possible, and unbiased variance estimates for Ẑ follow directly from importance sampling theory. This approach enables evidence accumulation without requiring samples to be ordered by likelihood value, directly supporting parallelized likelihood calculation and non-standard prior models (Williams et al., 2023).
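
To make the estimator concrete, the following sketch computes Ẑ and its Monte Carlo error for a hypothetical one-dimensional problem, using two fixed Gaussians in place of trained flows as the meta-proposal components (an illustrative toy, not the nessai implementation):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy problem: uniform prior on [-5, 5], Gaussian likelihood centred at 1.
log_L = lambda x: stats.norm.logpdf(x, loc=1.0, scale=0.5)
log_prior = lambda x: np.where(np.abs(x) <= 5.0, -np.log(10.0), -np.inf)

# Meta-proposal Q: mixture of two Gaussians standing in for the flows q_j.
alphas = np.array([0.5, 0.5])
components = [stats.norm(0.0, 3.0), stats.norm(1.0, 1.0)]

# Draw batches with sizes proportional to the mixture weights alpha_j.
n_tot = 20_000
samples = np.concatenate(
    [c.rvs(int(a * n_tot), random_state=rng) for a, c in zip(alphas, components)]
)

# Importance weights w = pi / Q and the evidence estimate Z_hat.
log_Q = np.logaddexp.reduce(
    [np.log(a) + c.logpdf(samples) for a, c in zip(alphas, components)], axis=0
)
integrand = np.exp(log_L(samples) + log_prior(samples) - log_Q)
Z_hat = integrand.mean()
Z_err = integrand.std(ddof=1) / np.sqrt(len(samples))
print(f"Z_hat = {Z_hat:.4f} +/- {Z_err:.4f}  (exact value: 0.1)")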

2. Architecture and Training of Normalising Flows

The normalising flows in i-nessai are invertible neural-network transformations capable of learning highly non-trivial densities. The Real-NVP architecture is employed, comprising multiple affine coupling blocks (a configuration of 8 blocks, 4 layers, and 96 neurons was found optimal in PTA applications), where each coupling layer partitions the input vector and parametrizes conditional updates via small neural networks. Training is performed by maximum likelihood on the current set of live points under the constraint L(θ) > t_j, essentially minimizing

\mathcal{L}_{\text{flow}} = -\frac{1}{N_{\text{live}}}\sum_{i} w(\theta_i)\,\log q_j(\theta_i),

where the w(θ_i) are reweighting factors, ensuring the flow approximates the constrained prior efficiently (Williams et al., 2023, D'Amico et al., 5 Nov 2025).
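
A minimal sketch of this weighted maximum-likelihood fit is shown below; it assumes a flow object exposing a log_prob method (as in nflows- or glasflow-style implementations) and is illustrative rather than the exact i-nessai training loop:

import torch

def train_flow(flow, live_points, weights, n_epochs=200, lr=1e-3):
    # `flow` is any torch.nn.Module exposing log_prob(x); `live_points` are
    # the current samples above the likelihood threshold t_j and `weights`
    # are their reweighting factors w(theta_i).
    x = torch.as_tensor(live_points, dtype=torch.float32)
    w = torch.as_tensor(weights, dtype=torch.float32)
    optimiser = torch.optim.Adam(flow.parameters(), lr=lr)
    for _ in range(n_epochs):
        optimiser.zero_grad()
        # Weighted negative log-likelihood: -(1/N) * sum_i w_i * log q_j(theta_i)
        loss = -(w * flow.log_prob(x)).mean()
        loss.backward()
        optimiser.step()
    return flow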

Each new normalising flow augments the mixture proposal Q, and batch sampling occurs directly from these flows rather than via rejection or MCMC, offering dramatically improved acceptance rates in high-dimensional, correlated posteriors. Typically, after flow retraining, fresh candidate draws are generated in bulk, evaluated in parallel, and weighted according to the mixture proposal density, further reducing sampling overhead (Villa et al., 3 Nov 2025).

3. Sampling Procedure and Algorithmic Steps

The core i-nessai loop is structured as follows, reflecting the capabilities described in the literature:

Initialize N_live samples θ_j from π(θ)
Set meta-proposal Q_1 = π
while evidence remaining above tolerance:
    1. Discard fraction ρ of samples (weighted quantile or entropy rule)
    2. Define likelihood threshold t_j from discarded samples
    3. Train flow q_j(θ) on current live set above t_j
    4. Update batch size n_j and meta-proposal weights α_j
    5. Draw n_j new points θ ~ q_j; evaluate L(θ); assign weights w(θ)
    6. Update meta-proposal Q, evidence estimate, and N_tot
    7. Repeat until stopping criterion (remaining live evidence < τ)
Postprocess: Optionally redraw samples from final Q for unbiased evidence/posterior estimation

This procedure allows samples to be added in batches, and all points are retained, including "leaky" draws that fall below the current likelihood threshold; these are corrected for by importance weighting in the final evidence and posterior computation (Williams et al., 2023). The process supports full parallelization of likelihood computation, and the meta-proposal can accommodate highly nontrivial, multimodal priors arising in hierarchical Bayesian models (Villa et al., 3 Nov 2025, D'Amico et al., 5 Nov 2025).
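
In practice, i-nessai is distributed as part of the nessai package. The sketch below shows one way to define a model and enable the importance nested sampler; the toy Gaussian likelihood and the keyword arguments are illustrative, and the exact options should be checked against the installed nessai version's documentation:

import numpy as np
from nessai.flowsampler import FlowSampler
from nessai.model import Model

class GaussianModel(Model):
    # Toy two-parameter model: unit Gaussian likelihood, uniform priors.
    def __init__(self):
        self.names = ["x", "y"]
        self.bounds = {"x": [-10, 10], "y": [-10, 10]}

    def log_prior(self, x):
        # Uniform prior within the bounds; -inf outside.
        log_p = np.log(self.in_bounds(x), dtype="float64")
        for n in self.names:
            log_p -= np.log(self.bounds[n][1] - self.bounds[n][0])
        return log_p

    def log_likelihood(self, x):
        log_l = np.zeros(x.size)
        for n in self.names:
            log_l += -0.5 * x[n] ** 2 - 0.5 * np.log(2 * np.pi)
        return log_l

sampler = FlowSampler(
    GaussianModel(),
    output="outdir",
    importance_nested_sampler=True,  # use i-nessai instead of standard nessai
    nlive=1000,
    n_pool=4,  # parallel likelihood evaluation
)
sampler.run()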

4. Performance, Validation, and Computational Benchmarks

i-nessai achieves pronounced efficiency gains over both standard nested sampling and parallel-tempering MCMC. For analytic likelihoods (Gaussian and multimodal mixtures) in up to 32 dimensions, i-nessai produces unbiased evidence estimates within Monte Carlo error, with even smaller bias after post-resampling. In gravitational-wave parameter estimation, i-nessai requires 2.68× fewer likelihood calls than nessai and 13.3× fewer than dynesty for binary black hole signals, and achieves up to 15× faster walltimes for long-duration neutron-star signals when combined with reduced-order quadrature (Williams et al., 2023).

In PTA data analysis, effective sample rates improve by three orders of magnitude relative to PTMCMC, with scaling of effective sampling rates vs. dimension (S ≈ 0.34 for the 1→3-pulsar offset, compared to S ≈ 0.11 for PTMCMC), and empirical error bars on log-evidence estimates as small as ≲ 10^{-3} for runs with 50+ parameters (Villa et al., 3 Nov 2025). The normalising flow architecture can be tuned to the problem dimensionality; a mid-sized flow (8 blocks, 4 layers, 96 neurons) provided optimal ESS per CPU hour in 10-pulsar setups.

Parallelization is supported natively: likelihood evaluation is distributed over worker pools, and flow training uses PyTorch threading. For 52-dimensional PTA runs, n_pool=12 and n_threads=1 provided a 5× speed-up over single-core runs, with flow training overheads amortized over the entire nested-sampling process (Villa et al., 3 Nov 2025).
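
As a generic illustration of the worker-pool pattern (plain Python, not the nessai internals), likelihood evaluation over a batch of proposed points can be distributed as follows:

import numpy as np
from multiprocessing import Pool

def log_likelihood(theta):
    # Placeholder; in practice this is the expensive model evaluation.
    return -0.5 * float(np.sum(theta ** 2))

def evaluate_batch(points, n_pool=12):
    # Evaluate the likelihood of each proposed point in parallel workers.
    with Pool(processes=n_pool) as pool:
        return np.array(pool.map(log_likelihood, points))

if __name__ == "__main__":
    # 52-dimensional draws standing in for samples from the current flow.
    batch = np.random.default_rng(1).normal(size=(1000, 52))
    log_l = evaluate_batch(list(batch), n_pool=12)
    print(log_l.shape)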

5. Hierarchical Bayesian Inference and Prior Reparameterization

i-nessai is integral to advanced hierarchical Bayesian frameworks in which strong dependencies exist between physical parameters and hyperparameters (e.g., PTA noise models). Prior sensitivity is mitigated by reparameterization via normalising flows, specifically through

  • Push-forward flows (PF-NF) on orthogonalized hyperparameters,
  • Conditional flows (PB-CNF) mapping latent variables to physical parameters conditioned on hyperparameters.

Sampling proceeds by drawing uniform variates in the hypercube and transforming via the learned flows, maintaining tractable Jacobians and proper probability densities. This framework allows for the decorrelation of priors from model parameters and supports complex multimodal posterior inference without introducing sampling bias (D'Amico et al., 5 Nov 2025).
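
A minimal sketch of this hypercube-to-physical transform is given below. It assumes an nflows-style transform whose inverse method maps latent points to physical parameters and returns the log absolute Jacobian determinant, together with the flow's standard-normal base distribution; it is illustrative rather than the exact i-nessai/PTA implementation:

import numpy as np
import torch
from scipy.stats import norm

def prior_transform_and_logpdf(transform, base_dist, u):
    # `u` is an (n, d) array of uniform variates from the unit hypercube.
    # Uniform variates -> standard-normal latent points via the inverse CDF.
    z = torch.as_tensor(norm.ppf(u), dtype=torch.float32)
    with torch.no_grad():
        # Latent -> physical parameters, with log|det d theta / d z|.
        theta, log_det = transform.inverse(z)
        # Change of variables: log p(theta) = log p(z) - log|det d theta / d z|.
        log_prior = base_dist.log_prob(z) - log_det
    return theta.numpy(), log_prior.numpy()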

Flow-based prior transforms and log-PDF evaluations are provided via the nflows library. In PTA implementations, the Enterprise framework delivers likelihood evaluation, and "dummy" wide priors maintain compatibility. Live-point sizes and flow architecture are chosen to balance sampling noise and flow training cost (e.g., 4000 live points for single-pulsar inference, mini-batched flow retraining on GPU in ~9 minutes for 20,000 prior draws) (D'Amico et al., 5 Nov 2025).

6. Stability, Diagnostics, and Practical Recommendations

Robust diagnostics confirm i-nessai reliability:

  • State-evolution plots trace minimum/median/maximum log-likelihood, likelihood thresholds, cumulative log-evidence, and cumulative likelihood calls.
  • Corner plots of internal parameter correlations (log L, log Q, log W) verify correct anti-correlations and weight corrections.
  • Run-to-run reproducibility tests show minimal empirical scatter in evidence and posteriors.

Practical tuning involves the following (a minimal helper sketch follows the list):

  • Setting N_live ≳ 10×d for problem dimension d,
  • Employing n_pool ≈ half the number of physical cores for likelihood evaluation,
  • Moderate flow size (e.g., 8 blocks × 4 layers × 96 neurons),
  • Monitoring diagnostics for smooth compression and calibrated uncertainties.
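
A rule-of-thumb helper capturing these recommendations might look like the following; the settings dictionary and its keys are illustrative, not a fixed nessai interface:

import os

def suggest_settings(dim, n_blocks=8, n_layers=4, n_neurons=96):
    # Rule-of-thumb settings for a problem of dimension `dim`: N_live >= 10*d,
    # n_pool roughly half the available cores (os.cpu_count reports logical
    # cores), and a moderate Real-NVP-style flow; adapt the keys as needed.
    n_cores = os.cpu_count() or 1
    return {
        "nlive": max(1000, 10 * dim),
        "n_pool": max(1, n_cores // 2),
        "flow_config": {
            "n_blocks": n_blocks,
            "n_layers": n_layers,
            "n_neurons": n_neurons,
        },
    }

print(suggest_settings(dim=52))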

Automated hyperparameter search (Optuna) can be applied. For next-generation PTA analyses, GPU acceleration is recommended for flow training; likelihood evaluation in Enterprise remains CPU-bound.
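
A hedged sketch of such a search with Optuna over the flow architecture is shown below; the scoring function is a toy stand-in that should be replaced by a real figure of merit, such as effective samples per CPU hour from an actual i-nessai run:

import optuna

def run_sampler_and_score(n_blocks, n_layers, n_neurons):
    # Toy stand-in so the sketch runs; replace with a real i-nessai analysis
    # returning, e.g., effective samples per CPU hour.
    return -((n_blocks - 8) ** 2 + (n_layers - 4) ** 2 + ((n_neurons - 96) / 32) ** 2)

def objective(trial):
    n_blocks = trial.suggest_int("n_blocks", 4, 12)
    n_layers = trial.suggest_int("n_layers", 2, 6)
    n_neurons = trial.suggest_categorical("n_neurons", [32, 64, 96, 128])
    return run_sampler_and_score(n_blocks, n_layers, n_neurons)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params)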

7. Broader Impact and Applications

i-nessai provides an efficient, flexible platform for Bayesian evidence and posterior inference in complex scientific workflows. Its removal of bottlenecks (prior rejection sampling and strict likelihood ordering), its batched importance-sampling scheme, and its compatibility with hierarchical models directly impact gravitational-wave parameter estimation and PTA analysis, allowing tractable solutions in otherwise computationally prohibitive scenarios (Williams et al., 2023, Villa et al., 3 Nov 2025, D'Amico et al., 5 Nov 2025).

The mixture meta-proposal and importance weighting open pathways to future developments integrating adaptive SMC, variational inference, and hierarchical chaining. The combination of unbiased evidence estimation, computational efficiency, and algorithmic simplicity positions i-nessai as a primary tool for high-dimensional Bayesian inference in astrophysics and beyond.
