A Brief Review of Optimal Scaling of the Main MCMC Approaches and Optimal Scaling of Additive TMCMC Under Non-Regular Cases (1405.0913v5)

Published 5 May 2014 in stat.CO

Abstract: Transformation-based Markov Chain Monte Carlo (TMCMC) was recently proposed by Dutta and Bhattacharya (2013) as a more efficient alternative to the Metropolis-Hastings algorithm, in particular the Random Walk Metropolis (RWM) algorithm, especially in high dimensions. The main advantage of this algorithm is that it simultaneously updates all components of a high-dimensional parameter via appropriate deterministic transformations of a single random variable, thereby reducing time complexity and enhancing the acceptance rate. The optimal scaling of the additive TMCMC approach has already been studied for the Gaussian proposal density by Dey and Bhattacharya (2013). In this paper, we discuss diffusion-based optimal scaling behavior for non-Gaussian proposal densities, in particular uniform, Student's t, and Cauchy proposals. We also consider diffusion-based optimal scaling for non-Gaussian proposals when the target density is discontinuous. In the case of the RWM algorithm, these non-regular situations have been studied by Neal and Roberts (2011) in terms of expected squared jumping distance (ESJD), but the diffusion-based approach has not been considered. Although we could not formally prove our diffusion result for the Cauchy proposal, simulation-based results led us to conjecture that the diffusion result still holds in the Cauchy case. We compare our diffusion-based TMCMC approach with the ESJD-based RWM approach for the very challenging Cauchy proposal case, showing that the former clearly outperforms the latter.
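The additive TMCMC move described in the abstract, updating every coordinate by plus or minus the same single random magnitude, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian draw for the magnitude, the step scale, and the example target are all assumptions chosen for demonstration.

```python
import numpy as np

def additive_tmcmc_step(x, log_target, eps_scale, rng):
    """One additive TMCMC step (illustrative sketch).

    All d coordinates move by +/- the SAME random magnitude eps, so a
    single scalar draw drives the whole d-dimensional update. The
    additive move has unit Jacobian, so the plain Metropolis ratio
    applies.
    """
    d = x.size
    eps = abs(rng.normal(0.0, eps_scale))      # one scalar innovation
    signs = rng.choice([-1.0, 1.0], size=d)    # independent random signs
    proposal = x + signs * eps
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        return proposal, True
    return x, False

# Usage: sample a 20-dimensional standard normal target (illustrative).
rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * x @ x
d = 20
x = np.zeros(d)
n_steps, n_accept = 10_000, 0
for _ in range(n_steps):
    x, ok = additive_tmcmc_step(x, log_target,
                                eps_scale=2.4 / np.sqrt(d), rng=rng)
    n_accept += ok
print(f"acceptance rate: {n_accept / n_steps:.2f}")
```

Note the contrast with RWM, which would draw d independent innovations per step; here a single scalar draw is transformed deterministically into a d-dimensional move, which is the source of the reduced time complexity mentioned above.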
