Population Monte Carlo with Normalizing Flow (2312.03857v2)
Abstract: Adaptive importance sampling (AIS) methods provide a useful alternative to Markov Chain Monte Carlo (MCMC) algorithms for performing inference of intractable distributions. Population Monte Carlo (PMC) algorithms constitute a family of AIS approaches which adapt the proposal distributions iteratively to improve the approximation of the target distribution. Recent work in this area primarily focuses on improving the proposal adaptation procedure for high-dimensional applications. However, most AIS algorithms use simple proposal distributions for sampling, which may be inadequate for exploring target distributions with intricate geometries. In this work, we construct expressive proposal distributions in the AIS framework using a normalizing flow, an appealing approach for modeling complex distributions. We use an iterative parameter update rule to enhance the approximation of the target distribution. Numerical experiments show that in high-dimensional settings, the proposed algorithm offers significantly improved performance compared to existing techniques.
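To make the adaptation loop concrete, below is a minimal sketch of AIS with a flow-based proposal. Everything in it is an assumption for illustration, not the paper's method: the proposal is a one-layer diagonal affine flow (real flows stack many such layers), the target `log_target` is an arbitrary banana-shaped density, and the update rule is an importance-weighted cross-entropy (forward-KL) step rather than the paper's exact rule.

```python
# Sketch only: AIS with a flow proposal. The single affine layer, the banana
# target, and the weighted cross-entropy update are illustrative assumptions,
# not the architecture or update rule from the paper.
import math
import torch

torch.manual_seed(0)
dim = 2

def log_target(x):
    # Unnormalized log density of a banana-shaped target (illustrative choice).
    return -0.5 * (x[:, 0] ** 2 / 4.0 + (x[:, 1] - 0.25 * x[:, 0] ** 2) ** 2)

# One-layer affine "flow": z ~ N(0, I), x = mu + exp(log_sigma) * z.
mu = torch.zeros(dim, requires_grad=True)
log_sigma = torch.zeros(dim, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=5e-2)

def log_q(x):
    # Proposal log density via the change-of-variables formula.
    z = (x - mu) / torch.exp(log_sigma)
    return (-0.5 * (z ** 2).sum(dim=1)
            - 0.5 * dim * math.log(2.0 * math.pi)
            - log_sigma.sum())

n_samples, n_iters = 500, 200
for t in range(n_iters):
    with torch.no_grad():
        # Sample the current proposal and compute importance weights
        # w_i ∝ p(x_i) / q(x_i), self-normalized since p is unnormalized.
        x = mu + torch.exp(log_sigma) * torch.randn(n_samples, dim)
        w = torch.softmax(log_target(x) - log_q(x), dim=0)
    # Adaptation step: raise log q on high-weight samples, pulling the
    # proposal toward the target (importance-weighted forward KL).
    loss = -(w * log_q(x)).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Self-normalized IS estimate of E_p[x] under the adapted proposal.
with torch.no_grad():
    x = mu + torch.exp(log_sigma) * torch.randn(n_samples, dim)
    w = torch.softmax(log_target(x) - log_q(x), dim=0)
    print("estimated mean:", (w[:, None] * x).sum(dim=0))
```

The structure mirrors a PMC iteration: sample from the proposal, weight by target-over-proposal density, then adapt the proposal parameters. Substituting a deeper invertible network for the affine layer is what gives the flow-based approach its extra expressiveness on targets with intricate geometry.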