Diffusion Limit for the Random Walk Metropolis Algorithm Out of Stationarity (1405.4896v2)

Published 19 May 2014 in math.PR

Abstract: The Random Walk Metropolis (RWM) algorithm is a Metropolis-Hastings MCMC algorithm designed to sample from a given target distribution $\pi$ with Lebesgue density on $\mathbb{R}^N$. RWM constructs a Markov chain by randomly proposing a new position (the "proposal move"), which is then accepted or rejected according to a rule which makes the chain reversible with respect to $\pi$. When the dimension $N$ is large, a key question is to determine the optimal scaling with $N$ of the proposal variance: if the proposal variance is too large, the algorithm will reject the proposed moves too often; if it is too small, the algorithm will explore the state space too slowly. Determining the optimal scaling of the proposal variance also gives a measure of the cost of the algorithm. One approach to this issue, which we adopt here, is to derive diffusion limits for the algorithm. Such an approach was proposed in the seminal papers [RGG97, RR98]; in particular, in [RGG97] the authors derive a diffusion limit for the RWM algorithm under the following two assumptions: i) the algorithm is started in stationarity; ii) the target measure $\pi$ is in product form. The present paper considers the situation of practical interest in which both assumptions i) and ii) are removed. That is, a) we study the case (which occurs in practice) in which the algorithm is started out of stationarity, and b) we consider target measures which are not in product form. The target measures that we consider arise in Bayesian nonparametric statistics and in the study of conditioned diffusions. We prove that, out of stationarity, the optimal scaling for the proposal variance is $O(N^{-1})$, as it is in stationarity. Notice that the optimal scalings in and out of stationarity need not be the same in general, and indeed they differ, e.g., in the case of the MALA algorithm [KOS16].
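The abstract's description of RWM, with proposal variance shrinking as $O(N^{-1})$, translates directly into a short algorithm. Below is a minimal Python sketch under stated assumptions: the target is specified through a log-density `log_pi`, the step-size constant `ell` is an illustrative free parameter, and the Gaussian example target is a placeholder, not taken from the paper.

    import numpy as np

    def rwm(log_pi, x0, n_steps, ell=1.0, rng=None):
        """Random Walk Metropolis with the O(1/N) proposal-variance scaling."""
        rng = np.random.default_rng() if rng is None else rng
        N = x0.shape[0]
        sigma2 = ell**2 / N  # proposal variance scaled as ell^2 / N
        x = x0.copy()
        chain = np.empty((n_steps, N))
        for t in range(n_steps):
            y = x + np.sqrt(sigma2) * rng.standard_normal(N)  # proposal move
            # Metropolis accept/reject: for a symmetric proposal the move is
            # accepted with probability min(1, pi(y)/pi(x)), which makes the
            # chain reversible with respect to pi.
            if np.log(rng.uniform()) < log_pi(y) - log_pi(x):
                x = y
            chain[t] = x
        return chain

    # Example: standard Gaussian target in N = 100 dimensions, deliberately
    # started out of stationarity at a point far from the mode.
    samples = rwm(lambda x: -0.5 * x @ x, x0=5.0 * np.ones(100), n_steps=10_000)

The key line is `sigma2 = ell**2 / N`: this is the $O(N^{-1})$ proposal-variance scaling that, per the paper, remains optimal even when the chain is started out of stationarity.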
