Non-asymptotic analysis of Langevin-type Monte Carlo algorithms (2303.12407v5)

Published 22 Mar 2023 in math.ST, math.PR, stat.ML, and stat.TH

Abstract: We study Langevin-type algorithms for sampling from Gibbs distributions whose potentials are dissipative and whose weak gradients have finite moduli of continuity that need not converge to zero. Our main result is a non-asymptotic upper bound on the 2-Wasserstein distance between a Gibbs distribution and the law of a general Langevin-type algorithm, based on the Liptser–Shiryaev theory and Poincaré inequalities. We apply this bound to show that the Langevin Monte Carlo algorithm can approximate Gibbs distributions with arbitrary accuracy if the potentials are dissipative and their gradients are uniformly continuous. We also propose Langevin-type algorithms with spherical smoothing for distributions whose potentials are neither convex nor continuously differentiable.
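
To make the sampling scheme concrete, here is a minimal Python sketch (not taken from the paper) of one step of the unadjusted Langevin Monte Carlo iteration X_{k+1} = X_k - eta * grad U(X_k) + sqrt(2*eta) * xi_k, together with a standard spherical-smoothing (zeroth-order) gradient estimate of the kind the abstract alludes to for non-differentiable potentials. The potential U, step size eta, smoothing radius r, and number of directions are illustrative assumptions; the paper's exact estimator and constants may differ.

import numpy as np

def lmc_step(x, grad_U, eta, rng):
    """One unadjusted Langevin Monte Carlo step:
    X_{k+1} = X_k - eta * grad U(X_k) + sqrt(2 * eta) * xi_k, with xi_k ~ N(0, I)."""
    return x - eta * grad_U(x) + np.sqrt(2.0 * eta) * rng.standard_normal(x.shape)

def spherical_smoothed_grad(U, x, r, rng, n_dirs=64):
    """Monte Carlo estimate of the gradient of the ball-smoothed potential
    U_r(x) = E[U(x + r*B)], B uniform in the unit ball, via the identity
    grad U_r(x) = (d / r) * E[U(x + r*S) * S] with S uniform on the unit sphere.
    This is one common form of spherical smoothing, shown here as an assumption
    rather than the paper's exact construction."""
    d = x.shape[0]
    s = rng.standard_normal((n_dirs, d))
    s /= np.linalg.norm(s, axis=1, keepdims=True)  # uniform directions on the sphere
    vals = np.array([U(x + r * si) for si in s])
    return (d / r) * (vals[:, None] * s).mean(axis=0)

# Example: sample from a hypothetical non-smooth Gibbs density proportional to exp(-U)
# with U(x) = ||x||_1 + 0.5 * ||x||_2^2 (not differentiable where a coordinate is zero).
rng = np.random.default_rng(0)
U = lambda x: np.abs(x).sum() + 0.5 * np.dot(x, x)
x = np.zeros(5)
eta, r = 1e-2, 1e-1
samples = []
for _ in range(5000):
    x = lmc_step(x, lambda y: spherical_smoothed_grad(U, y, r, rng), eta, rng)
    samples.append(x.copy())

Under the abstract's assumptions (dissipative potential, uniformly continuous gradient), the paper's 2-Wasserstein bound guarantees that iterates of this kind approximate the target Gibbs distribution to arbitrary accuracy for a sufficiently small step size; the spherically smoothed variant is intended for potentials that are not continuously differentiable.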

