A Proximal Algorithm for Sampling (2202.13975v3)

Published 28 Feb 2022 in cs.LG and stat.ML

Abstract: We study sampling problems associated with potentials that lack smoothness. The potentials can be either convex or non-convex. Departing from the standard smooth setting, the potentials are only assumed to be weakly smooth or non-smooth, or a sum of multiple such functions. For this challenging sampling task, we develop a sampling algorithm that resembles proximal algorithms in optimization. Our algorithm is based on a special case of Gibbs sampling known as the alternating sampling framework (ASF). The key contribution of this work is a practical realization of the ASF based on rejection sampling, for both convex and non-convex potentials that are not necessarily smooth. In almost all the sampling settings considered in this work, our proximal sampling algorithm achieves better complexity than all existing methods.
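
Below is a minimal sketch of the scheme the abstract describes, under simplifying assumptions: the target is π(x) ∝ exp(−f(x)) with f convex (possibly non-smooth), and the proximal map of f is available in closed form. The ASF alternates an exact Gaussian draw of an auxiliary variable y with a draw of x from the so-called restricted Gaussian oracle (RGO); here the RGO is realized by rejection sampling from a Gaussian proposal centered at the proximal point, which is one standard way to implement it in the convex case. The function names (`proximal_sampler`, `prox_f`) are illustrative, and the sketch omits the paper's treatment of weakly smooth and non-convex potentials.

```python
import numpy as np

def proximal_sampler(f, prox_f, x0, eta, n_iters, rng=None):
    """Alternating sampling framework (ASF) for pi(x) ∝ exp(-f(x)).

    Gibbs-samples the joint density pi(x, y) ∝ exp(-f(x) - ||x - y||^2 / (2 eta)):
      y-step: y | x ~ N(x, eta I) -- an exact Gaussian draw;
      x-step: x | y ∝ exp(-f(x) - ||x - y||^2 / (2 eta)) -- the restricted
              Gaussian oracle (RGO), realized here by rejection sampling.
    The rejection step below is valid when f is convex.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float)).copy()
    d = x.shape[0]
    samples = np.empty((n_iters, d))
    for t in range(n_iters):
        # y-step: exact draw from the Gaussian conditional.
        y = x + np.sqrt(eta) * rng.standard_normal(d)

        # x-step: rejection sampling for the RGO. The proposal is
        # N(w, eta I) with w = prox_{eta f}(y), the minimizer of
        # g(z) = f(z) + ||z - y||^2 / (2 eta).
        w = prox_f(y, eta)
        g = lambda z: f(z) + np.sum((z - y) ** 2) / (2.0 * eta)
        g_w = g(w)
        while True:
            x_prop = w + np.sqrt(eta) * rng.standard_normal(d)
            # Convexity of f gives g(z) >= g(w) + ||z - w||^2 / (2 eta),
            # so log_accept <= 0 and the acceptance probability is in (0, 1].
            log_accept = g_w + np.sum((x_prop - w) ** 2) / (2.0 * eta) - g(x_prop)
            if np.log(rng.random()) < log_accept:
                x = x_prop
                break
        samples[t] = x
    return samples

# Example: the non-smooth potential f(x) = ||x||_1, whose target is a
# standard Laplace distribution; its proximal map is soft-thresholding.
f = lambda z: np.sum(np.abs(z))
prox_f = lambda y, eta: np.sign(y) * np.maximum(np.abs(y) - eta, 0.0)
draws = proximal_sampler(f, prox_f, x0=np.zeros(1), eta=0.5, n_iters=5000)
print(draws.mean(), draws.var())  # Laplace(0, 1): mean 0, variance 2
```

Roughly speaking, smaller eta keeps the rejection step's acceptance probability high at the cost of slower mixing of the outer Gibbs chain; quantifying this trade-off is exactly what the paper's complexity analysis addresses.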

