Metropolis-Adjusted Langevin Algorithm (MALA)
- MALA is a gradient-informed MCMC method that leverages Euler discretization and a Metropolis–Hastings correction to sample from complex distributions.
- It employs discretized Langevin dynamics, using potential function gradients to guide proposals toward regions of high posterior density.
- The nonasymptotic analysis of the method establishes exponential convergence to equilibrium, up to an error that is exponentially small in the step size, even when the drift is not globally Lipschitz.
The Metropolis-Adjusted Langevin Algorithm (MALA) is a Markov Chain Monte Carlo (MCMC) method that incorporates gradient information from the target density to propose new states, correcting discretization bias via a Metropolis–Hastings accept–reject step. MALA is designed to sample efficiently from complex high-dimensional distributions: it exploits the fact that the overdamped Langevin diffusion, a stochastic process with the target $\pi$ as its invariant measure, drifts toward regions of high probability, so that a discretization of this diffusion yields proposals directed toward regions of high posterior density. MALA’s theoretical and practical properties have been extensively studied, particularly in the context of target distributions that arise from stochastic differential equations (SDEs), Bayesian inverse problems, and high-dimensional statistics.
1. Mathematical Formulation and Theoretical Setting
At its core, MALA can be viewed as a Metropolis–Hastings scheme with proposals based on an Euler discretization of an overdamped Langevin SDE:

$$dX_t = -\nabla U(X_t)\,dt + \sqrt{2\beta^{-1}}\,dW_t,$$

where $U$ is the potential function (typically the negative log-density), $\beta > 0$ is an inverse temperature parameter, and $W_t$ is a standard Brownian motion. The corresponding invariant density is $\pi(x) \propto \exp(-\beta U(x))$.

MALA’s proposal at state $x$ is given by

$$X^{\star} = x - \Delta t\,\nabla U(x) + \sqrt{2\Delta t/\beta}\,\xi, \qquad \xi \sim \mathcal{N}(0, I),$$

where $\Delta t > 0$ is the time-step parameter. The proposed state is accepted with probability

$$\alpha(x, X^{\star}) = \min\left\{1,\ \frac{\pi(X^{\star})\,q(X^{\star}, x)}{\pi(x)\,q(x, X^{\star})}\right\},$$

where $q(x, y)$ is the transition density of the proposal kernel defined by this Euler scheme, i.e., a Gaussian density in $y$ with mean $x - \Delta t\,\nabla U(x)$ and covariance $(2\Delta t/\beta)\,I$.
The chain thus constructed exactly preserves $\pi$ (the SDE’s invariant measure) as its stationary distribution.
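To make the proposal and acceptance rule concrete, here is a minimal Python sketch of a single MALA update in this setting; the function names (`U`, `grad_U`), the signature, and the default arguments are illustrative assumptions rather than code from the referenced work.

```python
import numpy as np

def mala_step(x, U, grad_U, dt, beta=1.0, rng=None):
    """One MALA update targeting pi(x) proportional to exp(-beta * U(x)).

    The proposal is an Euler step of the overdamped Langevin SDE plus Gaussian
    noise; the Metropolis-Hastings correction uses the Gaussian transition
    density q of that Euler proposal, so the chain leaves pi exactly invariant.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)

    # Euler-Maruyama proposal: x - dt * grad U(x) + sqrt(2 dt / beta) * xi
    prop = x - dt * grad_U(x) + np.sqrt(2.0 * dt / beta) * rng.standard_normal(x.shape)

    def log_q(y, x_from):
        # Log-density (up to an additive constant) of proposing y from x_from.
        mean = x_from - dt * grad_U(x_from)
        return -beta * np.sum((y - mean) ** 2) / (4.0 * dt)

    # Log of the Metropolis-Hastings ratio; accepting when log(u) < log_alpha
    # realizes the acceptance probability min{1, ratio}.
    log_alpha = (-beta * U(prop) + log_q(x, prop)) - (-beta * U(x) + log_q(prop, x))

    if np.log(rng.uniform()) < log_alpha:
        return prop, True    # proposal accepted
    return x, False          # proposal rejected: stay at the current state
```

A full chain is obtained by iterating `mala_step` and recording the returned states; the boolean flag gives the empirical acceptance rate, which is the usual handle for tuning $\Delta t$.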
2. Nonasymptotic Mixing and Ergodicity
A central result of the referenced work (1008.3514) is the nonasymptotic quantification of MALA's rate of convergence to equilibrium, even when the drift is not globally Lipschitz—an important practical scenario. Despite the lack of a uniform spectral gap in such cases, the Metropolis–Hastings correction enables MALA to achieve strong ergodic properties by "patching" the instability in high-energy regions where discretization alone would lead to divergence.
The main theorem states that if $P$ denotes the MALA transition kernel over unit time and $\pi$ is the invariant measure, then a bound of the schematic form

$$\|P^{n}(x, \cdot) - \pi\|_{\mathrm{TV}} \;\le\; C\,V(x)\,\lambda^{n} \;+\; E(\Delta t)$$

holds, where $V$ is a Lyapunov function, $\lambda \in (0,1)$ is a contraction factor, $\Delta t$ is the time step, $E(\Delta t)$ is an error term that is exponentially small in $\Delta t$, and $C$ is a constant depending on the potential $U$ and the inverse temperature $\beta$.
This bound demonstrates exponential decay in total variation distance with respect to $n$ (the number of unit-time steps), up to an error term that is exponentially small in $\Delta t$. The analysis crucially leverages Lyapunov techniques and a "patching argument": the state space is divided into a compact low-energy region (where uniform minorization conditions can be established) and its complement, with high-energy proposals controlled by the rapidly decaying tails of $\pi$.
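For orientation, the classical Harris-type formulation of such an argument (stated here only to indicate the type of conditions involved, not the paper’s precise statements) pairs a geometric drift condition for the Lyapunov function with a minorization condition on a low-energy set:

```latex
% Classical drift-and-minorization pair (schematic; not quoted verbatim from 1008.3514):
(P V)(x) \;\le\; \lambda\, V(x) + b
\qquad \text{and} \qquad
P(x, \cdot) \;\ge\; \delta\, \nu(\cdot) \quad \text{for all } x \in \{V \le R\},
```

for some $\lambda \in (0,1)$, $b < \infty$, $\delta > 0$, a probability measure $\nu$, and a sufficiently large level $R$; Harris-type theorems convert these two ingredients into geometric convergence to the invariant measure.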
Assumptions on $U$ required by the analysis include (schematic versions of such conditions are sketched after this list):
- Quadratic growth of $U$ at infinity;
- An inequality controlling the gradient $\nabla U$ in terms of $U$ itself;
- A one-sided Lipschitz condition on $\nabla U$;
- Higher derivatives of $U$ bounded in terms of $U$.
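Purely as illustration, conditions of this kind can be written in the following schematic, textbook-style forms; these are assumptions made here for concreteness, not the verbatim hypotheses of (1008.3514):

```latex
% Illustrative (schematic) forms of such conditions; not quoted from the paper:
\begin{align*}
  c_1 |x|^2 - c_2 \;\le\; U(x) \;&\le\; c_3\,(1 + |x|^2)                      && \text{(quadratic growth at infinity)} \\
  |\nabla U(x)|^2 \;&\le\; C\,\bigl(1 + U(x)\bigr)                            && \text{(gradient controlled by } U \text{)} \\
  \langle \nabla U(x) - \nabla U(y),\, x - y \rangle \;&\ge\; -L\,|x - y|^2   && \text{(one-sided Lipschitz condition)} \\
  \|D^2 U(x)\| + \|D^3 U(x)\| \;&\le\; C\,\bigl(1 + U(x)\bigr)                && \text{(higher derivatives bounded by } U \text{)}
\end{align*}
```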
3. Practical and Algorithmic Implications
The derived nonasymptotic bounds provide direct quantitative guidance for simulation practice:
- For any sufficiently small step size $\Delta t$, the lack of a uniform spectral gap is essentially negligible due to the exponentially small error term $E(\Delta t)$.
- The algorithm remains robust even when the forward Euler proposals would diverge in the absence of a Metropolization step. The rejection mechanism "cleans up" these problematic proposals, maintaining both stability and correct invariant measure.
- Because convergence to equilibrium is quantified in total variation distance and is nonasymptotic, users may choose $\Delta t$ so that the error term is below the desired numerical tolerance for simulation horizons of interest (a short worked example follows this list).
- The bounds, however, are uniform only over initial conditions with $V(x)$ below a fixed level, justifying a focus on "localized" state spaces in calculations. In practice, high-energy regions contribute little due to the rapid decay of $\pi$.
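As a short worked example, assume purely for illustration that the error term takes the concrete form $E(\Delta t) = C e^{-c/\Delta t}$ with unspecified constants $C, c > 0$; then the step size needed to reach a tolerance $\varepsilon$ follows directly:

```latex
% Assuming E(\Delta t) = C e^{-c/\Delta t} purely for illustration:
C e^{-c/\Delta t} \le \varepsilon
\;\Longleftrightarrow\;
\frac{c}{\Delta t} \ge \log\frac{C}{\varepsilon}
\;\Longleftrightarrow\;
\Delta t \le \frac{c}{\log(C/\varepsilon)}
\qquad (0 < \varepsilon < C).
```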
4. Comparison with Unadjusted Euler and Related Methods
MALA markedly outperforms unadjusted Euler-based schemes for SDEs with nonglobally Lipschitz drifts. While forward Euler discretization alone may be unstable or non-ergodic, the Metropolis–Hastings adjustment ensures ergodicity and preserves the invariant measure exactly. The critical distinction is that MALA’s finite-time error term is exponentially small in $\Delta t$, while unadjusted Euler, even when it remains ergodic, typically incurs a discretization bias in its invariant distribution of order $\Delta t$.
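The contrast can be seen in a toy experiment; the quartic potential $U(x) = x^4/4$ (whose gradient $x^3$ is not globally Lipschitz), the step size, and the blow-up threshold below are illustrative choices, not taken from the referenced analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy non-globally-Lipschitz target: U(x) = x^4 / 4, so grad U(x) = x^3.
U = lambda x: 0.25 * x ** 4
grad_U = lambda x: x ** 3
dt, beta, n_steps = 0.3, 1.0, 5000

def log_q(y, x):
    # Log-density (up to an additive constant) of the Euler proposal y given x.
    mean = x - dt * grad_U(x)
    return -beta * (y - mean) ** 2 / (4.0 * dt)

x_euler = np.float64(1.0)   # unadjusted Euler-Maruyama chain
x_mala = np.float64(1.0)    # Metropolis-adjusted chain
blow_up_step = None
n_accept = 0

for step in range(n_steps):
    # Unadjusted Euler: sooner or later a noise kick pushes |x| past the
    # stability threshold of the explicit scheme and the iteration explodes.
    if blow_up_step is None:
        x_euler = x_euler - dt * grad_U(x_euler) + np.sqrt(2 * dt / beta) * rng.standard_normal()
        if not np.isfinite(x_euler) or abs(x_euler) > 1e6:
            blow_up_step = step

    # MALA: the same Euler proposal, but Metropolized, so wild moves are rejected.
    prop = x_mala - dt * grad_U(x_mala) + np.sqrt(2 * dt / beta) * rng.standard_normal()
    log_alpha = (-beta * U(prop) + log_q(x_mala, prop)) - (-beta * U(x_mala) + log_q(prop, x_mala))
    if np.log(rng.uniform()) < log_alpha:
        x_mala, n_accept = prop, n_accept + 1

print("Euler blow-up detected at step:", blow_up_step)   # typically within a few thousand steps
print("Final MALA state:", x_mala)                       # remains in the bulk of pi
print("MALA acceptance rate:", n_accept / n_steps)
```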
Compared with generic Metropolis–Hastings schemes that ignore gradient information (such as random-walk Metropolis), MALA is typically more efficient on a broad class of SDE-inspired targets because its proposals exploit the gradient, and hence the local geometry, of the target.
5. Implementation Considerations
When implementing MALA in systems with non-globally Lipschitz drifts, practitioners should:
- Set the step size $\Delta t$ as small as computational resources permit, targeting exponential suppression of the finite-time error;
- Monitor excursions into high-energy regions, recognizing that in practice these regions are visited only rarely, in accordance with the tails of $\pi$;
- Employ suitable Lyapunov drift diagnostics in simulations to detect potential failures of the assumptions in extreme regimes (a minimal monitoring sketch follows this list);
- Favor MALA over uncorrected Euler algorithms whenever the gradient of the log-density fails to be globally Lipschitz, as the Metropolis–Hastings correction ensures robust long-term convergence.
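As a minimal sketch of such monitoring (the helper `mala_diagnostics`, its signature, and the choice of statistics are illustrative assumptions, not prescriptions from the referenced work), one can summarize a recorded trajectory as follows:

```python
import numpy as np

def mala_diagnostics(states, accepted, V, window=500):
    """Simple run diagnostics for a recorded MALA trajectory.

    states   : array of shape (n_steps, dim) containing the chain states
    accepted : boolean array of shape (n_steps,) of accept/reject flags
    V        : Lyapunov-type statistic evaluated per state, e.g. the potential U
    window   : number of final steps used for the "recent" acceptance rate

    A collapsing recent acceptance rate suggests the chain has wandered into a
    region where nearly all proposals are rejected (or that the step size is too
    large), while runaway V values suggest the assumed tail behaviour is violated.
    """
    states = np.asarray(states, dtype=float)
    accepted = np.asarray(accepted, dtype=bool)
    v_values = np.array([V(s) for s in states])
    return {
        "acceptance_rate": accepted.mean(),
        "recent_acceptance_rate": accepted[-window:].mean(),
        "max_V_visited": v_values.max(),
        "final_V": v_values[-1],
    }
```

Fed with the states and accept/reject flags produced by iterating a step such as `mala_step` above, this gives a cheap first check that the chain stays in the regime where the assumptions, and hence the convergence guarantees, are plausibly in force.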
6. Summary and Broader Impact
The nonasymptotic mixing analysis of MALA demonstrates that, even in the absence of a spectral gap for the discretized dynamics, MALA inherits nearly all ergodic benefits of the underlying SDE thanks to the Metropolis–Hastings correction. This insight extends the practical guarantees of MALA to a wide class of applied problems—including statistical mechanics, Bayesian computation, and stochastic modeling—where only local regularity and tail conditions on the potential are available. Theoretical results, coupled with practical algorithmic strategies derived from Lyapunov and minorization arguments, ensure that MALA remains a robust and efficient sampler across challenging, high-dimensional settings.