Randomized Midpoint Langevin Monte Carlo
- RLMC is a stochastic numerical integrator that uses randomized midpoints to reduce bias and achieve higher-order accuracy compared to Euler methods.
- It leverages strong convexity and Lipschitz gradient conditions to ensure geometric ergodicity and optimal error bounds in Wasserstein-2 and KL metrics.
- Variants like PRLMC and parallel RLMC further reduce discretization bias and boost computational efficiency in high-dimensional Bayesian inference.
Randomized Midpoint Langevin Monte Carlo (RLMC) is a class of stochastic numerical integrators for simulating Langevin diffusions to sample from high-dimensional distributions. Unlike classical Euler-based discretizations, RLMC achieves higher-order accuracy and improved computational complexity by randomizing the integration point within each time step. This scheme is particularly effective for strongly convex and log-concave targets, and recent analysis demonstrates its near-optimality in Wasserstein-2 and KL metrics under mild regularity assumptions.
1. Algorithm and Update Rule
At the core of RLMC lies a randomized midpoint update for discretizing the overdamped Langevin SDE

$$dX_t = -\nabla f(X_t)\,dt + \sqrt{2}\,dB_t$$

for a potential $f : \mathbb{R}^d \to \mathbb{R}$, whose invariant measure is $\pi \propto e^{-f}$.
A single RLMC iteration with step size $h > 0$ proceeds as:
- Draw $u \sim \mathrm{Unif}(0,1)$
- Draw independent Gaussian vectors $\xi_n, \xi'_n \sim \mathcal{N}(0, I_d)$
- Compute the midpoint: $y_n = x_n - u h\,\nabla f(x_n) + \sqrt{2uh}\,\xi_n$
- Update: $x_{n+1} = x_n - h\,\nabla f(y_n) + \sqrt{2uh}\,\xi_n + \sqrt{2(1-u)h}\,\xi'_n$
This scheme is a two-gradient-call modification of the Unadjusted Langevin Algorithm (ULA). Randomizing the drift evaluation point yields a mean-zero local discretization error, eliminating the leading-order bias and decorrelating local errors across steps (Li et al., 17 Nov 2025, Yu et al., 2023).
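The iteration above can be sketched in code; this is a minimal sketch in the standard overdamped form, with function and variable names of our choosing:

```python
import numpy as np

def rlmc_step(x, grad_f, h, rng):
    """One randomized-midpoint Langevin step (overdamped form, sketch).

    x: current iterate, shape (d,); grad_f: gradient of the potential f;
    h: step size; rng: a numpy random Generator.
    """
    d = x.shape[0]
    u = rng.uniform()              # randomized midpoint fraction in (0, 1)
    xi1 = rng.standard_normal(d)   # Brownian increment over [0, u*h]
    xi2 = rng.standard_normal(d)   # independent increment over [u*h, h]
    # Euler predictor of length u*h gives the randomized midpoint
    y = x - u * h * grad_f(x) + np.sqrt(2.0 * u * h) * xi1
    # Full step: drift evaluated at the midpoint, with the Brownian
    # path shared between the predictor and the final update
    return (x - h * grad_f(y)
            + np.sqrt(2.0 * u * h) * xi1
            + np.sqrt(2.0 * (1.0 - u) * h) * xi2)
```

Note the two gradient calls per step, matching the description above; freezing $u = 1/2$ would recover a deterministic midpoint scheme.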
2. Mathematical Foundations and Regularity
To guarantee geometric ergodicity and optimal error bounds, RLMC requires the potential $f$ to be $m$-strongly convex and $L$-gradient Lipschitz:

$$\langle \nabla f(x) - \nabla f(y),\, x - y\rangle \ge m\|x - y\|^2, \qquad \|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|,$$

implying a finite condition number $\kappa = L/m \ge 1$. Additional regularity, such as bounded third derivatives, may be needed for sharper results and decreasing-step analysis (Shen et al., 17 Nov 2025, Li et al., 17 Nov 2025).
For generalization beyond log-concavity, analysis depends on:
- Dissipativity: $\langle \nabla f(x), x\rangle \ge a\|x\|^2 - b$ for some $a > 0$, $b \ge 0$
- Gradient Lipschitzness
- Log-Sobolev Inequality (LSI) for the target measure (Wang et al., 30 Sep 2025)
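For a quadratic potential $f(x) = \tfrac12 x^\top A x$, the strong-convexity and Lipschitz constants are just the extreme eigenvalues of the (constant) Hessian $A$. A minimal sketch, using an arbitrary example matrix of our choosing:

```python
import numpy as np

# Quadratic potential f(x) = 0.5 * x^T A x, so the Hessian is A everywhere.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

eigs = np.linalg.eigvalsh(A)   # Hessian eigenvalues (A is symmetric)
m, L = eigs.min(), eigs.max()  # strong-convexity and Lipschitz constants

assert m > 0, "f is strongly convex only if the Hessian is positive definite"
kappa = L / m                  # condition number entering the complexity bounds
print(f"m = {m:.3f}, L = {L:.3f}, kappa = {kappa:.3f}")
```

For general (non-quadratic) potentials one bounds the Hessian spectrum uniformly over the domain instead.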
3. Convergence Rates and Error Bounds
Constant Step Size
With a fixed step size $\gamma > 0$, RLMC defines a time-homogeneous Markov chain with a unique invariant measure $\pi_\gamma$ and converges to it exponentially fast in a weighted total variation norm, i.e. at a geometric rate $\rho^n$ for some $\rho \in (0,1)$ depending on $m$, $L$, and $\gamma$ (Li et al., 17 Nov 2025).
The stationary bias of $\pi_\gamma$ relative to the true target $\pi$ vanishes at a higher order in $\gamma$ than the first-order bias of ULA, and can be sharpened further under third-derivative control (Shen et al., 17 Nov 2025).
Decreasing Step Size
With a nonincreasing step sequence $(\gamma_n)$, RLMC converges to the exact target $\pi$ in test-function metrics, with quantitative rates governed by the decay of $\gamma_n$ and improved rates for sufficiently smooth test functions (Shen et al., 17 Nov 2025, Li et al., 17 Nov 2025).
KL and Total Variation Complexity
Applying Malliavin calculus and anticipative Girsanov arguments, RLMC achieves near-optimal gradient-query complexity for $\varepsilon$-accurate KL error, with polynomial dependence on the dimension $d$ and the condition number $\kappa = L/m$. This matches or surpasses the best previously known rates, improving on the complexity barrier of Euler-type methods (Zhang, 17 Jul 2025).
4. Randomized Midpoint and Poisson Variants
Randomized Midpoint
RLMC uses a single uniform random point per step to evaluate the drift, yielding a mean-zero local error and hence higher weak-order accuracy than deterministic midpoint or Euler methods (Shen et al., 2019, Yu et al., 2023):
- Overdamped: discretization bias of higher order in the step size than ULA's first-order bias
- Underdamped: analogous higher-order bias for the kinetic Langevin diffusion (He et al., 2020, Cao et al., 2020)
Poisson Randomized Midpoint LMC (PRLMC)
PRLMC introduces a Poisson-distributed number of random midpoints (Bernoulli or uniform selection over $K$ candidate midpoints per interval) to further debias the integrated drift. As $K \to \infty$, PRLMC approaches a true Poisson randomization, producing unbiased step corrections and potentially lower discretization bias (Shen et al., 17 Nov 2025, Kandasamy et al., 27 May 2024).
Parallelization
Splitting each step interval into $R$ subintervals, parallelized RLMC (pRLMC) evaluates the $R$ independent midpoint gradient tasks concurrently in each iteration. This architecture enables significant wall-clock speedup without degrading convergence rates, which is particularly valuable in high-dimensional settings (Yu et al., 22 Feb 2024).
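The structural point pRLMC exploits is that the midpoint gradient evaluations within one iteration are mutually independent. The following is a schematic of that parallel evaluation pattern only, not the full pRLMC update; the gradient and the set of midpoints are hypothetical examples:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def grad_f(x):
    # Example gradient: standard Gaussian potential f(x) = ||x||^2 / 2
    return x

def parallel_midpoint_grads(midpoints, n_workers=4):
    """Evaluate the drift at all R midpoints of one iteration concurrently.

    midpoints: list of R state vectors. The evaluations do not depend on
    one another, so they can be dispatched to a worker pool.
    """
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(grad_f, midpoints))

mids = [np.full(3, r / 4) for r in range(4)]  # R = 4 hypothetical midpoints
grads = parallel_midpoint_grads(mids)
```

For expensive gradients (e.g. minibatched posteriors or score networks), process- or GPU-level parallelism would replace the thread pool, but the dependency structure is the same.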
5. Nonasymptotic Analysis and Practical Implications
Recent advances established tight nonasymptotic bounds for RLMC and PRLMC in various metrics under minimal smoothness:
- Wasserstein-2 convergence rates for projected/tamed variants without global Lipschitzness (Wang et al., 30 Sep 2025)
- Query-complexity bounds with improved dependence on dimension and target accuracy for strongly convex problems (Yu et al., 2023, Yu et al., 22 Feb 2024)
- Rigorous confidence intervals for statistical estimates via CLTs (He et al., 2020)
- Empirical superiority in convergence versus ULA and deterministic midpoint, particularly for diffusion model sampling and score-based generative models (Kandasamy et al., 27 May 2024)
6. Comparison with Other Langevin Integrators
| Method | Complexity (KL / $W_2$) | Bias Order |
|---|---|---|
| ULA/Euler | baseline Euler-type rates | first order in the step size |
| RLMC | improved over Euler; near-optimal in KL (Zhang, 17 Jul 2025) | higher order (overdamped) |
| PRLMC (Poisson) | further improved ($W_2$) | higher order (underdamped) |
| Verlet+Midpoint | comparable higher-order scheme | higher order |
| Parallel RLMC | reduced wall-clock time at matching rates | as RLMC |
RLMC and its Poisson and parallel variants offer quantifiable improvements in strong and weak error rates, dimensional scaling, and practical runtime, especially compared to Euler–Maruyama and deterministic midpoint schemes (Zhang, 17 Jul 2025, He et al., 2020, Yu et al., 22 Feb 2024).
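A toy one-dimensional experiment (our own sketch, not drawn from the cited works) makes the bias gap visible: for the standard Gaussian target $f(x) = x^2/2$ at a deliberately coarse step size, the stationary variance of the RLMC chain sits much closer to the true value 1 than ULA's does:

```python
import numpy as np

def run_chain(step_fn, h, n_steps, rng):
    """Run a scalar chain for the potential f(x) = x^2 / 2 (so grad f(x) = x)."""
    x, out = 0.0, np.empty(n_steps)
    for i in range(n_steps):
        x = step_fn(x, h, rng)
        out[i] = x
    return out

def ula_step(x, h, rng):
    # Euler-Maruyama discretization of the Langevin SDE
    return x - h * x + np.sqrt(2 * h) * rng.standard_normal()

def rlmc_step(x, h, rng):
    # Randomized midpoint: drift evaluated at a uniformly random
    # intermediate point, sharing the Brownian increment
    u = rng.uniform()
    xi1, xi2 = rng.standard_normal(2)
    y = x - u * h * x + np.sqrt(2 * u * h) * xi1
    return x - h * y + np.sqrt(2 * u * h) * xi1 + np.sqrt(2 * (1 - u) * h) * xi2

rng = np.random.default_rng(1)
h, n = 0.5, 200_000                      # coarse step size to expose the bias
var_ula = run_chain(ula_step, h, n, rng)[n // 10:].var()
var_rlmc = run_chain(rlmc_step, h, n, rng)[n // 10:].var()
print(f"ULA variance:  {var_ula:.3f}")   # analytic value 1/(1 - h/2), about 1.33
print(f"RLMC variance: {var_rlmc:.3f}")  # much closer to the true value 1
```

At smaller step sizes both biases shrink, but RLMC's shrinks at a faster rate, consistent with the orders discussed above.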
7. Open Directions and Extensions
Ongoing research explores RLMC under nonconvexity, with dissipativity and LSI replacing log-concavity (Wang et al., 30 Sep 2025), extension to manifold Langevin sampling, and analysis in statistical inference tasks where ergodicity and higher-order bias impact sample quality. Further generalizations involve tamed and projected variants for unbounded drifts, and double-midpoint constructions for kinetic Langevin dynamics requiring third-order regularity.
Recent results point towards RLMC as an optimal integrator for a range of stochastic sampling problems, with minimal assumptions and strong complexity guarantees, bridging theoretical advances with empirical acceleration in applications to high-dimensional Bayesian inference and generative modeling (Shen et al., 17 Nov 2025, Kandasamy et al., 27 May 2024).