
Randomized Midpoint Langevin Monte Carlo

Updated 19 November 2025
  • RLMC is a stochastic numerical integrator that uses randomized midpoints to reduce bias and achieve higher-order accuracy compared to Euler methods.
  • It leverages strong convexity and Lipschitz gradient conditions to ensure geometric ergodicity and optimal error bounds in Wasserstein-2 and KL metrics.
  • Variants like PRLMC and parallel RLMC further reduce discretization bias and boost computational efficiency in high-dimensional Bayesian inference.

Randomized Midpoint Langevin Monte Carlo (RLMC) is a class of stochastic numerical integrators for simulating Langevin diffusions to sample from high-dimensional distributions. Unlike classical Euler-based discretizations, RLMC achieves higher-order accuracy and improved computational complexity by randomizing the integration point within each time step. This scheme is particularly effective for strongly convex and log-concave targets, and recent analysis demonstrates its near-optimality in Wasserstein-2 and KL metrics under mild regularity assumptions.

1. Algorithm and Update Rule

At the core of RLMC lies a randomized midpoint update for discretizing the overdamped Langevin SDE:

dX_t = -\nabla U(X_t) dt + \sqrt{2} dB_t

for a potential U: ℝ^d → ℝ.

A single RLMC iteration proceeds as:

  • Draw u_k ∼ Uniform[0, 1]
  • Draw independent Gaussian vectors ξ_k', ξ_{k+1} ∼ N(0, I_d)
  • Compute the midpoint:

Y_{k+u_k} = X_k - u_k \eta \nabla U(X_k) + \sqrt{2 u_k \eta} \xi_k'

  • Update:

X_{k+1} = X_k - \eta \nabla U(Y_{k+u_k}) + \sqrt{2\eta} \xi_{k+1}

This scheme is a two-gradient-call modification of the Unadjusted Langevin Algorithm (ULA). The randomization of the drift evaluation point yields a mean-zero local discretization error, eliminating the leading-order bias and decorrelating local errors across steps (Li et al., 17 Nov 2025, Yu et al., 2023).
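As a concrete illustration, the following NumPy sketch implements the two-gradient-call update above on a toy problem. The quadratic potential grad_U, the step size, and all other constants are placeholders chosen here for illustration, and the two Gaussian draws are taken independently, following the simplified presentation above.

```python
# Minimal RLMC sketch for a toy target (illustrative placeholders throughout).
# Here U(x) = ||x||^2 / 2, so the target pi is the standard Gaussian N(0, I_d).
import numpy as np

def grad_U(x):
    return x  # gradient of the toy potential U(x) = ||x||^2 / 2

def rlmc(x0, eta, n_steps, rng=None):
    """Run n_steps of randomized midpoint Langevin Monte Carlo from x0."""
    rng = np.random.default_rng() if rng is None else rng
    d = x0.shape[0]
    x = x0.copy()
    for _ in range(n_steps):
        u = rng.uniform()                 # randomized midpoint location u_k
        xi_mid = rng.standard_normal(d)   # midpoint noise xi_k'
        xi_full = rng.standard_normal(d)  # full-step noise xi_{k+1}
        # Midpoint: an Euler step of length u * eta from x
        y = x - u * eta * grad_U(x) + np.sqrt(2 * u * eta) * xi_mid
        # Full step: drift evaluated at the randomized midpoint
        x = x - eta * grad_U(y) + np.sqrt(2 * eta) * xi_full
    return x

# Toy check: per-coordinate variance should be near 1 for the N(0, I) target.
out = np.stack([rlmc(np.zeros(5), eta=0.1, n_steps=300,
                     rng=np.random.default_rng(s)) for s in range(200)])
print(out.var(axis=0))
```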

2. Mathematical Foundations and Regularity

To guarantee geometric ergodicity and optimal error bounds, RLMC requires the potential U to be m-strongly convex and L-gradient Lipschitz:

m \|x-y\|^2 \le \langle \nabla U(x) - \nabla U(y), x-y \rangle \le L \|x-y\|^2 \quad \forall x,y,

implying m I_d ⪯ ∇²U(x) ⪯ L I_d. Additional regularity, such as bounded third derivatives, may be needed for sharper results and decreasing-step analysis (Shen et al., 17 Nov 2025, Li et al., 17 Nov 2025).
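As a standard sanity check (a textbook example, not specific to the cited papers): for a Gaussian target π ∝ exp(−U) with

U(x) = \tfrac{1}{2} x^\top A x, \qquad A \succ 0, \qquad \nabla^2 U(x) \equiv A,

the condition holds with m = λ_min(A) and L = λ_max(A), so the condition number is κ = λ_max(A)/λ_min(A).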

For generalization beyond log-concavity, analysis depends on:

  • Dissipativity: ⟨x, ∇U(x)⟩ ≥ μ|x|² − μ′ d
  • Gradient Lipschitzness
  • Log-Sobolev inequality (LSI) for the target measure π (Wang et al., 30 Sep 2025); see the example after this list
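A simple potential meeting all three conditions without convexity (our illustration, not drawn from the cited papers) is a bounded trigonometric perturbation of a quadratic:

U(x) = \tfrac{1}{2}|x|^2 + a \cos(\langle w, x \rangle), \qquad \nabla U(x) = x - a \sin(\langle w, x \rangle) w,

whose gradient is (1 + a|w|²)-Lipschitz, which fails convexity once a|w|² > 1, and which is dissipative because ⟨x, ∇U(x)⟩ ≥ |x|² − a|w||x| ≥ ½|x|² − ½a²|w|².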

3. Convergence Rates and Error Bounds

Constant Step Size

With a fixed step size η, RLMC defines a homogeneous Markov chain with a unique invariant measure π_η and exponential convergence in weighted total variation:

d_{TV,V}(\nu Q_\eta^n, \pi_\eta) \leq C(1+\nu(V))e^{-cn}

where V(x) = 1 + |x|² (Li et al., 17 Nov 2025).

The stationary bias relative to the true target π satisfies:

W_2(\pi_\eta, \pi) = O(\sqrt{\eta}) \quad \text{(overdamped)}

and can be sharpened to O(η) under third-derivative control (Shen et al., 17 Nov 2025).
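These bias orders translate directly into step-size calibration; writing C and C′ for the problem-dependent constants hidden in the O(·) bounds,

C\sqrt{\eta} \le \varepsilon \;\Longrightarrow\; \eta \le (\varepsilon/C)^2, \qquad C'\eta \le \varepsilon \;\Longrightarrow\; \eta \le \varepsilon/C',

so the sharpened O(η) bias permits a quadratically larger step for the same stationary accuracy ε.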

Decreasing Step Size

With a nonincreasing step-size sequence {γ_k}, RLMC achieves

d_{\mathcal{G}}(\mathcal{L}(Y_{t_n}), \pi) \leq C(1+|x|^2)\gamma_n

in test-function metrics, and O(γ_n) rates in W_2 for sufficiently smooth U (Shen et al., 17 Nov 2025, Li et al., 17 Nov 2025).
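For instance, with the illustrative schedule γ_k = γ_1/k (our example; the cited analyses allow general nonincreasing sequences subject to their step conditions), the bound above gives

d_{\mathcal{G}}(\mathcal{L}(Y_{t_n}), \pi) = O(1/n),

so the law of the chain converges to π itself, with no residual stationary bias π_η ≠ π to correct for.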

KL and Total Variation Complexity

Applying Malliavin calculus and anticipative Girsanov arguments, RLMC achieves near-optimal query complexity for ε²-accurate KL error:

\widetilde{O}(\kappa^{5/4} d^{1/4} \varepsilon^{-1/2})

where κ = L/m is the condition number. This matches or surpasses the best previously known rates, breaking the Õ(κ^2 d ε^{-2}) barrier of Euler-type methods (Zhang, 17 Jul 2025).
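To make the scaling concrete, the following sketch plugs illustrative values into the two bounds; constants and polylog factors are suppressed, so only the ratio is meaningful:

```python
# Illustrative comparison of gradient-query complexities; all numbers are
# placeholders (constants and polylog factors suppressed), not benchmarks.
kappa, d, eps = 10.0, 1_000, 1e-2

n_euler = kappa**2 * d * eps**-2              # O~(kappa^2 d eps^-2)
n_rlmc = kappa**1.25 * d**0.25 * eps**-0.5    # O~(kappa^{5/4} d^{1/4} eps^{-1/2})

print(f"Euler-type queries ~ {n_euler:.2e}")  # ~ 1.0e+09
print(f"RLMC queries       ~ {n_rlmc:.2e}")   # ~ 1.0e+03
print(f"ratio              ~ {n_euler / n_rlmc:.1e}")
```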

4. Randomized Midpoint and Poisson Variants

Randomized Midpoint

RLMC uses a single uniform random point per step to evaluate the drift, producing higher weak-order accuracy (bias O(h²), variance O(h³)) than deterministic midpoint or Euler methods (Shen et al., 2019, Yu et al., 2023).

Poisson Randomized Midpoint LMC (PRLMC)

PRLMC introduces a Poisson-distributed number of random midpoints (Bernoulli or uniform selection over K candidate midpoints per interval) to further debias the integrated drift. As K → ∞, PRLMC approaches a true Poisson randomization, producing unbiased step corrections and potentially lower discretization bias (Shen et al., 17 Nov 2025, Kandasamy et al., 27 May 2024).
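A schematic rendering of the Bernoulli-selection idea (our own simplification, not the exact scheme of the cited papers: the intermediate points use a one-shot Euler predictor and the Brownian interpolation is replaced by independent noise): each of the K candidate midpoints is evaluated only with probability 1/K, so the expected cost stays near one extra gradient per step while the drift estimate remains unbiased for its K-point discretization.

```python
# Schematic Bernoulli/Poisson randomized-midpoint step (simplified sketch).
import numpy as np

def grad_U(x):
    return x  # toy potential U(x) = ||x||^2 / 2

def prlmc_step(x, eta, K, rng):
    """One step with K candidate midpoints, each kept with probability 1/K."""
    d = x.shape[0]
    drift = np.zeros(d)
    for i in range(1, K + 1):
        if rng.uniform() < 1.0 / K:   # evaluate midpoint i with prob. 1/K
            t = i * eta / K           # its time offset inside the step
            # One-shot Euler predictor for X at offset t (interpolation simplified)
            y = x - t * grad_U(x) + np.sqrt(2 * t) * rng.standard_normal(d)
            # Weight eta (not eta/K): unbiased for (eta/K) * sum_i grad_U(y_i)
            drift += eta * grad_U(y)
    return x - drift + np.sqrt(2 * eta) * rng.standard_normal(d)

rng = np.random.default_rng(0)
x = np.zeros(5)
for _ in range(200):                  # expected ~1 kept midpoint per step
    x = prlmc_step(x, eta=0.05, K=8, rng=rng)
print(x)
```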

Parallelization

Splitting each step into R subintervals, parallelized RLMC (pRLMC) aggregates R independent midpoint tasks per iteration. This architecture enables significant wall-clock speedup without impacting convergence rates, which is particularly valuable in high-dimensional settings (Yu et al., 22 Feb 2024).
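The aggregation pattern can be sketched as follows (schematic only: the cited pRLMC uses iterative refinement rounds and a careful Brownian coupling that this sketch omits). The R midpoint gradients inside one step depend only on the state at the start of the step, so they are independent tasks; here they are vectorized, and in a real implementation each row would go to a parallel worker.

```python
# Schematic parallel-midpoint step: R gradient tasks per iteration,
# vectorized here as a stand-in for dispatch to R parallel workers.
# Simplified (single refinement round, independent noise, toy potential).
import numpy as np

def grad_U(x):
    return x  # toy potential; also works on a batch of shape (R, d)

def prlmc_parallel_step(x, eta, R, rng):
    d = x.shape[0]
    # One random evaluation point in each of the R subintervals of [0, eta]
    t = (np.arange(1, R + 1) - rng.uniform(size=R)) * eta / R
    # Predictors for X at the R offsets -- each row is an independent task
    Y = (x - t[:, None] * grad_U(x)
         + np.sqrt(2 * t)[:, None] * rng.standard_normal((R, d)))
    drift = (eta / R) * grad_U(Y).sum(axis=0)  # average the R midpoint gradients
    return x - drift + np.sqrt(2 * eta) * rng.standard_normal(d)

rng = np.random.default_rng(0)
x = np.zeros(5)
for _ in range(200):
    x = prlmc_parallel_step(x, eta=0.05, R=4, rng=rng)
print(x)
```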

5. Nonasymptotic Analysis and Practical Implications

Recent advances have established tight nonasymptotic bounds for RLMC and PRLMC in Wasserstein-2, KL, and total-variation metrics under minimal smoothness assumptions; the table in Section 6 summarizes the resulting complexities and bias orders.

6. Comparison with Other Langevin Integrators

| Method | Complexity (KL / W_2) | Bias order |
| --- | --- | --- |
| ULA / Euler | Õ(κ^2 d ε^{-2}) | O(h) |
| RLMC | Õ(κ^{5/4} d^{1/4} ε^{-1/2}) | O(h^{1/2}) (overdamped) |
| PRLMC (Poisson) | Õ(κ log(1/ε)) (W_2) | O(h^{3/2}) (underdamped) |
| Verlet + midpoint | Õ(κ^{3/2} ε^{-2/3}) | O(h^{3/2}) |
| Parallel RLMC | O(κ log(1/ε)) wall-clock time | O(h^{1/2}) |

RLMC and its Poisson and parallel variants offer quantifiable improvements in strong and weak error rates, dimensional scaling, and practical runtime, especially compared to Euler–Maruyama and deterministic midpoint schemes (Zhang, 17 Jul 2025, He et al., 2020, Yu et al., 22 Feb 2024).

7. Open Directions and Extensions

Ongoing research explores RLMC under nonconvexity, with dissipativity and LSI replacing log-concavity (Wang et al., 30 Sep 2025), extension to manifold Langevin sampling, and analysis in statistical inference tasks where ergodicity and higher-order bias impact sample quality. Further generalizations involve tamed and projected variants for unbounded drifts, and double-midpoint constructions for kinetic Langevin dynamics requiring third-order regularity.

Recent results point towards RLMC as an optimal integrator for a range of stochastic sampling problems, with minimal assumptions and strong complexity guarantees, bridging theoretical advances with empirical acceleration in applications to high-dimensional Bayesian inference and generative modeling (Shen et al., 17 Nov 2025, Kandasamy et al., 27 May 2024).
