
Theoretical guarantees for lifted samplers (2405.15952v1)

Published 24 May 2024 in stat.CO, math.ST, and stat.TH

Abstract: Lifted samplers form a class of Markov chain Monte Carlo methods which has drawn a lot of attention in recent years due to superior performance in challenging Bayesian applications. A canonical example of such a sampler is the one derived from a random walk Metropolis algorithm on a totally-ordered state space, such as the integers or the real numbers. The lifted sampler is obtained by splitting the proposal distribution in two: one part in the increasing direction, and the other in the decreasing direction. It keeps following a direction until a rejection, upon which it flips the direction. In terms of asymptotic variances, it outperforms the random walk Metropolis algorithm, regardless of the target distribution, at no additional computational cost. Other studies show, however, that beyond this simple case, lifted samplers do not always outperform their Metropolis counterparts. In this paper, we leverage the celebrated work of Tierney (1998) to provide an analysis in a general framework encompassing a broad class of lifted samplers. Our finding is that, essentially, the asymptotic variances cannot increase by a factor of more than 2, regardless of the target distribution, the way the directions are induced, and the type of algorithm from which the lifted sampler is derived (be it a Metropolis-Hastings algorithm, a reversible jump algorithm, etc.). This result indicates that, while there is potentially a lot to gain from lifting a sampler, there is not much to lose.
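
For intuition, the following is a minimal sketch (not taken from the paper) of the canonical lifted sampler described in the abstract: a guided random walk Metropolis on the real line. The standard normal target, step size, and function names are illustrative assumptions.

```python
# Minimal sketch of a lifted (guided) random walk Metropolis sampler on the
# real line. The target and tuning parameters are assumptions for illustration.
import math
import random


def log_target(x):
    # Log-density of a standard normal target (normalising constant omitted).
    return -0.5 * x * x


def lifted_rwm(n_iters, step=1.0, x0=0.0, seed=0):
    rng = random.Random(seed)
    x, direction = x0, 1  # direction in {+1, -1} is the lifted variable
    samples = []
    for _ in range(n_iters):
        # Propose only in the current direction (folded symmetric increment).
        y = x + direction * abs(rng.gauss(0.0, step))
        # Symmetric increments cancel in the ratio, as in plain random walk
        # Metropolis, so the acceptance probability is min(1, pi(y)/pi(x)).
        if rng.random() < math.exp(min(0.0, log_target(y) - log_target(x))):
            x = y                   # accept: keep moving in the same direction
        else:
            direction = -direction  # reject: flip the direction
        samples.append(x)
    return samples


if __name__ == "__main__":
    draws = lifted_rwm(50_000)
    print(f"empirical mean: {sum(draws) / len(draws):.3f} (target mean is 0)")
```

The key design choice, matching the description above, is that the direction is retained after every acceptance and flipped only on rejection, which suppresses the diffusive back-and-forth behaviour of the reversible random walk Metropolis algorithm.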

References (25)
  1. arXiv:2012.14881.
  2. Bernoulli, 24, 842–872.
  3. Ann. Statist., 49, 1958–1981.
  4. Barker, A. A. (1965) Monte Carlo calculations of the radial distribution functions for a proton-electron plasma. Austral. J. Phys., 18, 119–134.
  5. Beaumont, M. A. (2003) Estimation of population growth or decline in genetically monitored populations. Genetics, 164, 1139–1160.
  6. In Proceedings of the thirty-first annual ACM symposium on Theory of computing, 275–281.
  7. Ann. Appl. Probab., 726–752.
  8. J. Comput. Graph. Statist., 30, 312–323. ArXiv:1911.01340.
  9. Bernoulli, 30, 2301–2325.
  10. Green, P. J. (1995) Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82, 711–732.
  11. Gustafson, P. (1998) A guided walk Metropolis algorithm. Stat. Comput., 8, 357–364.
  12. Hastings, W. K. (1970) Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57, 97–109.
  13. Electron. Commun. Probab., 12, 454–464.
  14. Horowitz, A. M. (1991) A generalized guided Monte Carlo algorithm. Phys. Lett. B, 268, 247–252.
  15. J. Amer. Statist. Assoc., 95, 121–134.
  16. J. R. Stat. Soc. Ser. B. Stat. Methodol., 84, 496–523.
  17. Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. and Teller, E. (1953) Equation of state calculations by fast computing machines. J. Chem. Phys., 21, 1087–1092.
  18. Peskun, P. (1973) Optimum Monte-Carlo sampling using Markov chains. Biometrika, 60, 607–612.
  19. Probab. Surv., 1, 20–71.
  20. Phys. Rev. E, 93, 043318.
  21. — (2016b) Irreversible simulated tempering. J. Phys. Soc. Jpn., 85, 104002.
  22. J. R. Stat. Soc. Ser. B. Stat. Methodol., 84, 321–350.
  23. Tierney, L. (1998) A note on Metropolis-Hastings kernels for general state spaces. Ann. Appl. Probab., 8, 1–9.
  24. Vucelja, M. (2016) Lifting–a nonreversible Markov chain Monte Carlo algorithm. Amer. J. Phys., 84, 958–968.
  25. Zanella, G. (2020) Informed proposals for local MCMC in discrete spaces. J. Amer. Statist. Assoc., 115, 852–865.
