On stochastic gradient Langevin dynamics with dependent data streams in the logconcave case (1812.02709v3)

Published 6 Dec 2018 in math.ST, math.PR, stat.ML, and stat.TH

Abstract: We study the problem of sampling from a probability distribution $\pi$ on $\mathbb{R}^d$ which has a density with respect to the Lebesgue measure, known up to a normalization factor, $x \mapsto e^{-U(x)} / \int_{\mathbb{R}^d} e^{-U(y)} \,\mathrm{d}y$. We analyze a sampling method based on the Euler discretization of the Langevin stochastic differential equation under the assumptions that the potential $U$ is continuously differentiable, $\nabla U$ is Lipschitz, and $U$ is strongly convex. We focus on the case where the gradient of the log-density cannot be computed directly, but unbiased estimates of the gradient from possibly dependent observations are available. This setting can be seen as a combination of stochastic approximation (here, stochastic gradient) algorithms with discretized Langevin dynamics. We obtain an upper bound on the Wasserstein-2 distance between the law of the iterates of this algorithm and the target distribution $\pi$, with constants depending explicitly on the Lipschitz and strong convexity constants of the potential and on the dimension of the space. Finally, under weaker assumptions on $U$ and its gradient, but in the presence of independent observations, we obtain analogous results in Wasserstein-2 distance.
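The algorithm described here is the stochastic gradient Langevin dynamics (SGLD) recursion $\theta_{k+1} = \theta_k - \gamma\, G_{k+1} + \sqrt{2\gamma}\, \xi_{k+1}$, where $G_{k+1}$ is an unbiased estimate of $\nabla U(\theta_k)$ built from the incoming (possibly dependent) observations and $\xi_{k+1} \sim \mathcal{N}(0, I_d)$. Below is a minimal sketch with a fixed step size; the names `sgld`, `grad_estimate`, and `data_stream` are illustrative placeholders, not from the paper:

```python
import numpy as np

def sgld(theta0, grad_estimate, data_stream, gamma, n_iters, rng=None):
    """Euler discretization of the Langevin SDE with stochastic gradients:
        theta_{k+1} = theta_k - gamma * G_{k+1} + sqrt(2 * gamma) * xi_{k+1},
    where G_{k+1} = grad_estimate(theta_k, next observation) is an unbiased
    estimate of grad U(theta_k) and xi_{k+1} ~ N(0, I_d)."""
    rng = rng if rng is not None else np.random.default_rng()
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iters):
        obs = next(data_stream)        # possibly dependent observations
        g = grad_estimate(theta, obs)  # unbiased estimate of grad U(theta)
        noise = rng.standard_normal(theta.shape)
        theta = theta - gamma * g + np.sqrt(2.0 * gamma) * noise
    return theta

# Example: standard Gaussian target, U(x) = ||x||^2 / 2, so grad U(x) = x.
# Observations are i.i.d. N(0, I_d) noise added to the exact gradient,
# which gives an unbiased estimate (the simplest instance of this setting).
def gaussian_noise_stream(d, rng):
    while True:
        yield rng.standard_normal(d)

rng = np.random.default_rng(0)
sample = sgld(
    theta0=np.zeros(2),
    grad_estimate=lambda theta, xi: theta + xi,  # E[theta + xi] = grad U(theta)
    data_stream=gaussian_noise_stream(2, rng),
    gamma=1e-2,
    n_iters=10_000,
)
```

The paper bounds the Wasserstein-2 distance between the law of the iterates produced by such a recursion and $\pi$; the i.i.d. example above is only the simplest instance of the dependent-data setting analyzed there.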

Citations (35)
