Machine learning assisted Bayesian model comparison: learnt harmonic mean estimator (2111.12720v3)

Published 24 Nov 2021 in stat.ME, astro-ph.IM, and stat.CO

Abstract: We resurrect the infamous harmonic mean estimator for computing the marginal likelihood (Bayesian evidence) and solve its problematic large variance. The marginal likelihood is a key component of Bayesian model selection to evaluate model posterior probabilities; however, its computation is challenging. The original harmonic mean estimator, first proposed by Newton and Raftery in 1994, involves computing the harmonic mean of the likelihood given samples from the posterior. It was immediately realised that the original estimator can fail catastrophically since its variance can become very large (possibly not finite). A number of variants of the harmonic mean estimator have been proposed to address this issue although none have proven fully satisfactory. We present the \emph{learnt harmonic mean estimator}, a variant of the original estimator that solves its large variance problem. This is achieved by interpreting the harmonic mean estimator as importance sampling and introducing a new target distribution. The new target distribution is learned to approximate the optimal but inaccessible target, while minimising the variance of the resulting estimator. Since the estimator requires samples of the posterior only, it is agnostic to the sampling strategy used. We validate the estimator on a variety of numerical experiments, including a number of pathological examples where the original harmonic mean estimator fails catastrophically. We also consider a cosmological application, where our approach leads to $\sim$ 3 to 6 times more samples than current state-of-the-art techniques in 1/3 of the time. In all cases our learnt harmonic mean estimator is shown to be highly accurate. The estimator is computationally scalable and can be applied to problems of dimension $O(10^3)$ and beyond. Code implementing the learnt harmonic mean estimator is made publicly available.

Citations (12)

Summary

  • The paper introduces a learnt harmonic mean estimator that minimizes variance when computing marginal likelihoods in Bayesian model selection.
  • It reframes the estimation as an importance sampling problem by leveraging machine learning to learn an optimal sampling density.
  • Empirical tests on benchmark problems show the estimator is accurate and robust, yielding evidence estimates consistent with nested sampling at lower computational cost.

Machine Learning Assisted Bayesian Model Comparison: Learnt Harmonic Mean Estimator

The paper "Machine learning assisted Bayesian model comparison: learnt harmonic mean estimator" by McEwen et al. is centered around addressing the long-standing issues associated with the harmonic mean estimator in Bayesian model selection. The harmonic mean estimator, originally introduced by Newton and Raftery in 1994, quickly became infamous due to its large and often unmanageable variance when estimating the marginal likelihood, also known as Bayesian evidence. The work presents a novel variant called the learnt harmonic mean estimator, which effectively overcomes these challenges through machine learning techniques.

Summary of Contributions

The principal contribution of this work is the learnt harmonic mean estimator, a robust method for computing the marginal likelihood required for Bayesian model comparison. The original harmonic mean estimator is unstable because, viewed as importance sampling, its implicit target distribution (the prior) typically has heavier tails than the posterior sampling density, so the variance of the estimate can become very large or even infinite. This paper makes the importance-sampling interpretation explicit, replaces the problematic target with a new one, and introduces a learning phase to approximate the optimal but inaccessible target distribution.
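Concretely, the re-targeted harmonic mean estimator is an importance-sampling estimate of the reciprocal evidence 1/z, with the posterior as sampling density and a target density varphi in place of the prior (the original estimator corresponds to varphi = prior). The sketch below is illustrative, not the authors' harmonic package; the toy conjugate-Gaussian model and all function names are assumptions. It shows that with the idealized optimal target, the normalized posterior itself, every importance ratio equals 1/z exactly:

```python
import numpy as np

def log_gaussian(x, mu, var):
    """Log-density of a univariate Gaussian."""
    return -0.5 * np.log(2.0 * np.pi * var) - (x - mu) ** 2 / (2.0 * var)

def log_reciprocal_evidence(log_target, log_lik, log_prior):
    """Re-targeted harmonic mean estimate of log(1/z).

    Inputs are log varphi, log L and log pi evaluated at posterior
    samples; the average is taken stably via log-sum-exp.
    """
    log_ratios = log_target - log_lik - log_prior
    m = log_ratios.max()
    return m + np.log(np.exp(log_ratios - m).mean())

# Toy conjugate model: y ~ N(theta, 1), theta ~ N(0, 4), datum y = 0.
s2 = 1.0 * 4.0 / (1.0 + 4.0)                   # posterior variance = 0.8
rng = np.random.default_rng(42)
theta = rng.normal(0.0, np.sqrt(s2), 5000)     # posterior samples

log_lik = log_gaussian(0.0, theta, 1.0)        # L(theta) = N(0; theta, 1)
log_prior = log_gaussian(theta, 0.0, 4.0)      # pi(theta) = N(theta; 0, 4)
log_target = log_gaussian(theta, 0.0, s2)      # idealized target: the posterior

log_z_true = log_gaussian(0.0, 0.0, 5.0)       # analytic evidence N(0; 0, 5)
log_inv_z = log_reciprocal_evidence(log_target, log_lik, log_prior)
# Here every ratio varphi/(L * pi) equals 1/z, so log_inv_z matches
# -log_z_true up to floating point.
```

In practice the posterior is not available in normalized form, which is exactly why the target must be learned; the learned target only needs tails no heavier than the posterior's for the variance to stay finite.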

By employing machine learning to learn an appropriate target distribution, the authors ensure that the variance of the estimator is minimized. They split the posterior samples into training and evaluation sets to fit a model of the posterior distribution, with constraints ensuring the learned target's tails are appropriately controlled. Several models, including modified Gaussian mixture models and kernel density estimation, are considered for approximating the target distribution.
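One simple way to realize this train-and-validate idea is to fit a density to a training split of the posterior samples, constrain its tails, and select hyperparameters by minimizing the relative variance of the importance ratios on a held-out split. The sketch below uses a covariance-shrunk Gaussian purely for illustration; the paper's actual models (modified Gaussian mixtures, kernel density estimates) and the harmonic package API differ, and the stand-in likelihood and prior here are assumptions:

```python
import numpy as np

def fit_shrunk_gaussian(samples, shrink):
    """Fit a Gaussian to posterior samples, scaling the covariance by
    `shrink` <= 1 so the learned target has lighter tails than the
    posterior and the importance ratios stay bounded."""
    mu = samples.mean(axis=0)
    cov = shrink * np.atleast_2d(np.cov(samples.T))
    return mu, cov

def log_gauss(x, mu, cov):
    """Log-density of a multivariate Gaussian, evaluated row-wise."""
    d = mu.size
    diff = np.atleast_2d(x) - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    maha = np.einsum("ij,jk,ik->i", diff, inv, diff)
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet + maha)

def select_target(train, valid, log_lik_v, log_prior_v,
                  shrinks=(0.25, 0.5, 0.75, 1.0)):
    """Fit candidate targets on `train`; keep the one whose importance
    ratios varphi/(L*pi) have the smallest relative variance on `valid`."""
    best = None
    for s in shrinks:
        mu, cov = fit_shrunk_gaussian(train, s)
        log_ratio = log_gauss(valid, mu, cov) - log_lik_v - log_prior_v
        r = np.exp(log_ratio - log_ratio.max())    # rescaled ratios
        score = r.var() / r.mean() ** 2            # relative variance
        if best is None or score < best[0]:
            best = (score, s, mu, cov)
    return best[1], best[2], best[3]

# Illustrative run on stand-in posterior samples.
rng = np.random.default_rng(0)
post = rng.normal(size=(1000, 2))
train, valid = post[:500], post[500:]
log_lik_v = log_gauss(valid, np.zeros(2), np.eye(2))          # stand-in L
log_prior_v = log_gauss(valid, np.zeros(2), 9.0 * np.eye(2))  # stand-in pi
shrink, mu, cov = select_target(train, valid, log_lik_v, log_prior_v)
```

The relative-variance score is scale invariant, so it ranks candidate targets without needing the unknown normalization 1/z.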

Key Results and Validation

The empirical validation of the proposed estimator is conducted on a series of benchmark problems known to be problematic for the original harmonic mean estimator, including the Rosenbrock and Rastrigin functions, the Normal-Gamma model, logistic regression for the Pima Indians dataset, and non-nested linear regression for the Radiata pine data. These experiments show that the learnt harmonic mean estimator is accurate and robust across a diverse set of problematic, high-dimensional scenarios. For the cosmological application, the approach yields roughly 3 to 6 times more posterior samples than current state-of-the-art techniques in one third of the time. Throughout, the estimator produces marginal likelihoods and Bayes factors consistent with those obtained by nested sampling, while requiring less computational time and fewer samples.
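Once log-evidences are in hand, model comparison reduces to a difference of logs. The small helper below is illustrative (the function names are assumptions, and the Jeffreys-scale boundaries are the commonly quoted ones, not taken from this paper); it converts two natural-log evidence estimates into a base-10 log Bayes factor with a qualitative label:

```python
import numpy as np

def jeffreys_label(log10_bf):
    """Commonly quoted Jeffreys-scale strength of evidence for model 1."""
    if log10_bf < 0.0:
        return "favours model 2"
    if log10_bf < 0.5:
        return "barely worth mentioning"
    if log10_bf < 1.0:
        return "substantial"
    if log10_bf < 1.5:
        return "strong"
    if log10_bf < 2.0:
        return "very strong"
    return "decisive"

def compare_models(log_z1, log_z2):
    """Base-10 log Bayes factor from two natural-log evidence estimates."""
    log10_bf = (log_z1 - log_z2) / np.log(10.0)
    return log10_bf, jeffreys_label(log10_bf)

# Example: model 1's evidence is larger by 1.2 decades.
log10_bf, label = compare_models(0.0, -1.2 * np.log(10.0))
```

Because the learnt harmonic mean estimator returns the log-evidence for each model separately, Bayes factors between any pair of models follow without rerunning the sampler.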

Implications and Future Prospects

The implications of these findings are notable for Bayesian inference and for the many scientific domains that rely on model comparison. Because the learnt harmonic mean estimator decouples the sampling strategy from the marginal likelihood computation, posterior samples produced by any advanced sampling technique can be used without modifying the estimator itself. This flexibility allows rapid adaptation across domains with complex modeling requirements and simplifies implementation for simulation-based inference.

Future directions may explore the extension of the method to even higher dimensions and more intricate models, strengthening the estimator’s applicability. Additionally, there is potential for improved training strategies within the machine learning component to further reduce estimation variance. Exploration into alternative models for the target distribution can also enhance the estimator's performance in models exhibiting complex posterior landscapes with intricate degeneracies.

Overall, the learnt harmonic mean estimator represents a significant advancement in Bayesian model selection, providing an efficient and reliable method for estimating the marginal likelihood in a way that is scalable to practical applications in high-dimensional settings. The publicly available harmonic software package ensures reproducibility and encourages further development and adaptation in various research endeavors.
