No Free Lunch for Stochastic Gradient Langevin Dynamics (2412.01952v1)

Published 2 Dec 2024 in stat.CO

Abstract: As sample sizes grow, scalability has become a central concern in the development of Markov chain Monte Carlo (MCMC) methods. One general approach to this problem, exemplified by the popular stochastic gradient Langevin dynamics (SGLD) algorithm, is to use a small random subsample of the data at every time step. This paper, building on recent work such as Nagapetyan et al. (2017) and Johndrow et al. (2020), shows that this approach often fails: while decreasing the subsample size speeds up each MCMC step, for typical datasets this gain is offset by a matching decrease in accuracy. This result complements Nagapetyan et al. (2017), which came to the same conclusion but analyzed only specific upper bounds on errors rather than actual errors, and Johndrow et al. (2020), which did not analyze nonreversible algorithms and allowed for logarithmic improvements.
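
For context on the algorithm the abstract discusses, here is a minimal sketch of one SGLD update in Python. This is not the paper's code: the update rule is the standard SGLD step of Welling and Teh (2011), and the function names and the toy Gaussian model are illustrative assumptions.

```python
import numpy as np

def sgld_step(theta, data, step_size, batch_size, grad_log_prior, grad_log_lik, rng):
    """One SGLD update (Welling & Teh, 2011).

    Estimates the full-data log-likelihood gradient from a random minibatch,
    rescales it by N / n, and adds Gaussian noise with variance equal to the
    step size so the chain approximately targets the posterior.
    """
    N = len(data)
    idx = rng.choice(N, size=batch_size, replace=False)  # random subsample
    grad = grad_log_prior(theta) + (N / batch_size) * sum(
        grad_log_lik(theta, data[i]) for i in idx
    )
    noise = rng.normal(scale=np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad + noise

# Toy usage (illustrative): posterior over the mean of a unit-variance
# Gaussian with a standard normal prior.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=10_000)
grad_log_prior = lambda th: -th          # d/dtheta log N(theta; 0, 1)
grad_log_lik = lambda th, x: x - th      # d/dtheta log N(x; theta, 1)

theta = np.zeros(1)
for _ in range(5_000):
    theta = sgld_step(theta, data, 1e-4, 100, grad_log_prior, grad_log_lik, rng)
```

The trade-off the abstract describes is visible in this sketch: shrinking batch_size makes each step cheaper, but it inflates the variance of the rescaled minibatch gradient, which the paper argues cancels the speedup for typical datasets.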

