Mean square error analysis of stochastic gradient and variance-reduced sampling algorithms (2511.04413v1)
Abstract: This paper considers mean square error (MSE) analysis for stochastic gradient sampling algorithms applied to underdamped Langevin dynamics under a global convexity assumption. A novel discrete Poisson equation framework is developed to bound the time-averaged sampling error. For the Stochastic Gradient UBU (SG-UBU) sampler, we derive an explicit MSE bound and establish that the numerical bias exhibits first-order convergence with respect to the step size $h$, with the leading error coefficient proportional to the variance of the stochastic gradient. The analysis is further extended to variance-reduced algorithms for finite-sum potentials, specifically the SVRG-UBU and SAGA-UBU methods. For these algorithms, we identify a phase transition phenomenon whereby the convergence rate of the numerical bias shifts from first to second order as the step size decreases below a critical threshold. Theoretical findings are validated by numerical experiments. In addition, the analysis provides a practical empirical criterion for selecting between the mini-batch SG-UBU and SVRG-UBU samplers to achieve optimal computational efficiency.
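To make the algorithmic setting concrete, the following is a minimal sketch of an SG-UBU-style step for the underdamped Langevin SDE $dx = v\,dt$, $dv = -\nabla f(x)\,dt - \gamma v\,dt + \sqrt{2\gamma}\,dW$ at unit temperature: an exact Ornstein-Uhlenbeck half-step (U), a gradient kick (B) using a mini-batch stochastic gradient of a finite-sum potential, and a second U half-step. All function names, the correlated-noise bookkeeping, and the choice of splitting constants are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sg_ubu(grads, x0, v0, h, gamma, n_steps, batch_size, rng):
    """Illustrative SG-UBU sampler for 1D underdamped Langevin dynamics.

    `grads` is a list of per-term gradient functions for a finite-sum
    potential f = (1/N) * sum_i f_i; a random mini-batch replaces the
    full gradient in the B (kick) step.  Sketch only, not the paper's code.
    """
    N = len(grads)
    t = h / 2.0
    eta = np.exp(-gamma * t)
    # Exact OU half-step ("U"): (v, x) receive jointly Gaussian noise.
    var_v = 1.0 - eta**2
    cov_vx = (1.0 - eta) ** 2 / gamma
    var_x = (2.0 / gamma) * (
        t - 2.0 * (1.0 - eta) / gamma + (1.0 - eta**2) / (2.0 * gamma)
    )
    a = np.sqrt(var_v)              # Cholesky factors of the 2x2 covariance
    b = cov_vx / a
    c = np.sqrt(max(var_x - b**2, 0.0))

    def u_half(x, v):
        xi1, xi2 = rng.standard_normal(2)
        x_new = x + (1.0 - eta) / gamma * v + b * xi1 + c * xi2
        v_new = eta * v + a * xi1
        return x_new, v_new

    xs = np.empty(n_steps)
    x, v = x0, v0
    for k in range(n_steps):
        x, v = u_half(x, v)                        # U
        idx = rng.integers(0, N, size=batch_size)  # mini-batch indices
        g = np.mean([grads[i](x) for i in idx])    # stochastic gradient
        v = v - h * g                              # B (kick)
        x, v = u_half(x, v)                        # U
        xs[k] = x
    return xs
```

For a finite-sum quadratic potential $f_i(x) = (x - a_i)^2/2$ with mean-zero shifts $a_i$, the full gradient is $x$, so the target is $\mathcal{N}(0,1)$; with batch size 1 the stochastic-gradient variance equals the variance of the $a_i$, which per the abstract drives the leading $O(h)$ bias term.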