Fast Sampling and Inference via Preconditioned Langevin Dynamics

Published 11 Oct 2023 in stat.CO | (2310.07542v2)

Abstract: Sampling from distributions plays a crucial role in statistical inference. However, in many situations, obtaining exact samples from complex distributions is infeasible, so practitioners turn to approximate sampling techniques. Fast approximate sampling from complicated distributions has attracted considerable attention in recent years, and previous work has shown that for some problems a preconditioner can make the algorithm faster. In this work, we study the preconditioned Langevin Monte Carlo (LMC) algorithm and demonstrate its effectiveness for inference from the obtained samples. We also establish a convergence rate for the LMC Markov chain in total variation. Finally, we derive non-asymptotic bounds, in the Wasserstein distance, for approximate sampling from specific target distributions when the preconditioner is spatially invariant.
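To make the idea concrete, here is a minimal sketch of preconditioned Langevin Monte Carlo with a spatially invariant (constant) preconditioner. This is an illustrative implementation of the generic update x_{k+1} = x_k + h·M·∇log π(x_k) + √(2h)·M^{1/2}·ξ_k, not the paper's exact algorithm; the function names and the choice M = Σ are assumptions for the example.

```python
import numpy as np

def preconditioned_lmc(grad_log_pi, x0, M, step, n_steps, rng):
    """Unadjusted Langevin iterates with a constant preconditioner M.

    Update: x <- x + step * M @ grad_log_pi(x) + sqrt(2*step) * M^{1/2} @ xi,
    where xi is standard Gaussian noise. M must be symmetric positive definite.
    """
    sqrt_M = np.linalg.cholesky(M)  # any square root of M works
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        noise = sqrt_M @ rng.standard_normal(x.shape)
        x = x + step * (M @ grad_log_pi(x)) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Example: ill-conditioned Gaussian target N(0, Sigma).
Sigma = np.diag([1.0, 100.0])
Sigma_inv = np.linalg.inv(Sigma)
grad_log_pi = lambda x: -Sigma_inv @ x  # gradient of the log-density

rng = np.random.default_rng(0)
# Taking M = Sigma equalizes the coordinate scales, so one step size
# suits both directions; with M = I the stiff coordinate would force
# a much smaller step.
samples = preconditioned_lmc(grad_log_pi, np.zeros(2), Sigma, 0.1, 5000, rng)
```

With M = Σ the chain reduces to an AR(1) recursion with the same contraction factor in every coordinate, which is the intuition behind preconditioning speeding up convergence; note the discretization still biases the stationary covariance by a factor of order the step size.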
