
Batch means and spectral variance estimators in Markov chain Monte Carlo (0811.1729v5)

Published 11 Nov 2008 in math.ST and stat.TH

Abstract: Calculating a Monte Carlo standard error (MCSE) is an important step in the statistical analysis of the simulation output obtained from a Markov chain Monte Carlo experiment. An MCSE is usually based on an estimate of the variance of the asymptotic normal distribution. We consider spectral and batch means methods for estimating this variance. In particular, we establish conditions which guarantee that these estimators are strongly consistent as the simulation effort increases. In addition, for the batch means and overlapping batch means methods we establish conditions ensuring consistency in the mean-square sense which in turn allows us to calculate the optimal batch size up to a constant of proportionality. Finally, we examine the empirical finite-sample properties of spectral variance and batch means estimators and provide recommendations for practitioners.

Citations (249)

Summary

  • The paper establishes strong and mean-square consistency of non-overlapping batch means, overlapping batch means, and spectral variance estimators under the weaker condition of geometric ergodicity.
  • It shows that the optimal batch size to minimize MSE is proportional to n^(1/3), providing actionable guidance for efficient variance estimation in MCMC.
  • Empirical results using AR(1) and Bayesian probit models reveal that the Tukey–Hanning window enhances spectral variance estimator accuracy, especially in high-correlation scenarios.

Overview of Batch Means and Spectral Variance Estimators in Markov Chain Monte Carlo

This paper by James M. Flegal and Galin L. Jones addresses a critical aspect of Markov Chain Monte Carlo (MCMC) methodology: estimating the variance of the asymptotic normal distribution of the Monte Carlo error. Specifically, it compares two methods for this estimation—batch means and spectral variance estimators—and establishes conditions under which these estimators are consistent.

Theoretical Contributions

The authors provide important contributions to the theoretical understanding of these estimators. They establish the strong consistency of non-overlapping batch means (BM), overlapping batch means (OBM), and spectral variance (SV) estimators under conditions that are weaker than those previously known. Specifically, the authors require the Markov chain to be geometrically ergodic instead of uniformly ergodic, which is a less restrictive condition. Notably, for SV estimators, this represents a relaxation of prior assumptions, as these methods typically demand a stronger moment condition.
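To make the first of these estimators concrete, here is a minimal NumPy sketch of the non-overlapping batch means estimator of the asymptotic variance; the function name and default batch size are illustrative choices of ours, not the paper's:

```python
import numpy as np

def batch_means_variance(x, b=None):
    """Non-overlapping batch means (BM) estimate of the asymptotic
    variance sigma^2 in the Markov chain CLT for the sample mean.

    x : 1-D array of MCMC output for a scalar functional.
    b : batch size; defaults to roughly n^(1/3) full batches here.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if b is None:
        b = max(1, int(round(n ** (1 / 3))))
    a = n // b                                   # number of full batches
    batch_means = x[: a * b].reshape(a, b).mean(axis=1)
    mu_hat = x[: a * b].mean()
    # sigma2_BM = b / (a - 1) * sum_k (Ybar_k - mu_hat)^2
    return b * np.sum((batch_means - mu_hat) ** 2) / (a - 1)
```

The Monte Carlo standard error of the sample mean then follows as `sqrt(batch_means_variance(x) / len(x))`.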

Strong and Mean-Square Consistency

The paper proves that OBM and SV estimators are strongly consistent for geometrically ergodic Markov chains, even when the chains are not stationary, eliminating the need for a burn-in period. The authors also demonstrate mean-square consistency, showing that the estimators' mean squared error (MSE) converges to zero. Their analysis reveals that, asymptotically, the batch size minimizing the MSE is proportional to n^{1/3}.
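A matching sketch for the overlapping variant, which averages over all n − b + 1 length-b windows of the chain and uses the n^{1/3} batch-size rate by default (the implementation details here are our assumptions, not the paper's code):

```python
import numpy as np

def obm_variance(x, b=None):
    """Overlapping batch means (OBM) estimate of the asymptotic variance.

    Uses every length-b window of the chain; the MSE-optimal batch size
    grows like n^(1/3), so that rate is the default here.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if b is None:
        b = max(1, int(round(n ** (1 / 3))))
    mu_hat = x.mean()
    # Means of all n - b + 1 overlapping windows via a cumulative sum.
    csum = np.concatenate(([0.0], np.cumsum(x)))
    window_means = (csum[b:] - csum[:-b]) / b
    # sigma2_OBM = n*b / ((n - b)(n - b + 1)) * sum_j (Ybar_j - mu_hat)^2
    return n * b * np.sum((window_means - mu_hat) ** 2) / ((n - b) * (n - b + 1))
```

The cumulative-sum trick keeps the cost linear in n despite the overlap, which speaks to the computational trade-off raised later in the summary.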

Empirical Analysis

The empirical portion of the paper evaluates the finite-sample performance of the BM, OBM, and SV methods in two example scenarios: an AR(1) model and a Bayesian probit regression model. The authors show that the choice of estimator can impact the accuracy of coverage probabilities, particularly in high-correlation settings. They recommend using the Tukey–Hanning window for SV methods due to its superior performance across several scenarios.
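To make the Tukey–Hanning recommendation concrete, here is a sketch of a lag-window spectral variance estimator using that window; the truncation-point default is an assumption on our part, not the paper's tuning:

```python
import numpy as np

def sv_tukey_hanning(x, b=None):
    """Spectral variance (SV) estimate of the asymptotic variance using
    the Tukey-Hanning lag window w(u) = (1 + cos(pi * u)) / 2.

    sigma2_SV = sum over |s| < b of w(s / b) * gamma_hat(s), where
    gamma_hat are the biased sample autocovariances.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if b is None:
        b = max(1, int(round(n ** (1 / 3))))   # assumed truncation point
    xc = x - x.mean()
    # Biased autocovariances gamma_hat(s), s = 0, ..., b - 1.
    gammas = np.array([xc[: n - s] @ xc[s:] for s in range(b)]) / n
    w = (1 + np.cos(np.pi * np.arange(b) / b)) / 2   # Tukey-Hanning weights
    # Lag 0 counted once; positive lags twice, by symmetry of gamma_hat.
    return w[0] * gammas[0] + 2 * np.sum(w[1:] * gammas[1:])
```

As a sanity check in the spirit of the paper's AR(1) experiments, for the chain x_t = ρ x_{t−1} + ε_t with iid N(0, 1) innovations the true asymptotic variance of the sample mean is 1 / (1 − ρ)².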

Implications for Practice

The practical implications of this research are significant. By demonstrating weaker conditions for consistency, the authors provide practitioners with more flexible tools for variance estimation in MCMC. This work allows for more reliable estimation of Monte Carlo standard errors and informs decisions about simulation termination. The relaxation of consistency conditions suggests a broad applicability to different MCMC settings, particularly those involving complex posterior distributions like those encountered in Bayesian inference.

Future Directions

This research opens several avenues for further exploration. One is balancing the extra computational cost of overlapping batch means against their improved accuracy over non-overlapping versions. Additionally, further empirical studies could solidify the proposed approach's advantages and limitations across different types of data and model complexities.

In summary, this paper presents fundamental advancements in the estimation of Monte Carlo error variance via BM and SV methods, making significant theoretical strides while providing practical guidance for statisticians and data scientists using MCMC methodologies.