Variance Estimation in Adaptive Sequential Monte Carlo (1909.13602v2)

Published 30 Sep 2019 in math.ST, math.PR, stat.CO, and stat.TH

Abstract: Sequential Monte Carlo (SMC) methods are a classical set of techniques for simulating a sequence of probability measures through a simple selection/mutation mechanism. However, the associated selection functions and mutation kernels usually depend on tuning parameters that are crucial to the efficiency of the algorithm. A standard way to address this problem is to apply Adaptive Sequential Monte Carlo (ASMC) methods, which exploit the information given by the history of the sample to tune the parameters. This article is concerned with variance estimation in such ASMC methods. Specifically, we focus on the case where the asymptotic variance coincides with that of the "limiting" Sequential Monte Carlo algorithm as defined by Beskos et al. (2016). We prove that, under natural assumptions, the estimator introduced by Lee and Whiteley (2018) in the nonadaptive case (i.e., SMC) is also a consistent estimator of the asymptotic variance for ASMC methods. To do this, we introduce a new estimator that is expressed in terms of coalescent tree-based measures, and explain its connection with the previous one. Our estimator is constructed by tracing the genealogy of the associated Interacting Particle System. The tools we use connect the study of Particle Markov Chain Monte Carlo methods with the variance estimation problem in SMC methods. As such, they may give new insights when dealing with complex genealogy-related problems of Interacting Particle Systems in more general scenarios.
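The genealogy-tracing idea can be made concrete with a small sketch. The Python snippet below runs a bootstrap particle filter on a toy linear-Gaussian AR(1) state-space model, carries the Eve indices (time-0 ancestors) through multinomial resampling, and then forms a coalescent tree-based variance estimate by grouping particles that share an ancestor. The toy model, the parameter values, and the final formula (written in the Chan and Lai (2013) / Lee and Whiteley (2018) style for equally weighted particles after resampling) are illustrative assumptions; this is not the paper's adaptive construction or exact estimator.

```python
import numpy as np

def bootstrap_filter_with_eve_indices(y, N=1000, rho=0.9, sigma_x=1.0,
                                      sigma_y=1.0, seed=None):
    """Bootstrap particle filter for a toy AR(1) state-space model.

    Tracks Eve indices (the index of each particle's time-0 ancestor)
    through multinomial resampling, so the genealogy needed by
    coalescent tree-based variance estimators is available at the end.
    """
    rng = np.random.default_rng(seed)
    T = len(y)
    x = rng.normal(0.0, sigma_x, size=N)           # initial particles
    eve = np.arange(N)                             # each particle is its own time-0 ancestor
    for t in range(T):
        x = rho * x + rng.normal(0.0, sigma_x, size=N)      # mutation step
        logw = -0.5 * ((y[t] - x) / sigma_y) ** 2           # selection (observation) log-weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)                     # multinomial resampling
        x, eve = x[idx], eve[idx]                            # Eve indices follow the selected parents
    return x, eve

def genealogy_variance_estimate(x, eve, f=lambda s: s):
    """Coalescent tree-based variance estimate for the particle mean of f.

    Particles are equally weighted here because resampling was performed
    at the final step; the estimate sums, over each surviving time-0
    ancestor, the squared within-family sum of centred values.
    """
    N = x.shape[0]
    centred = (f(x) - f(x).mean()) / N             # equal normalized weights 1/N
    per_ancestor = np.bincount(eve, weights=centred, minlength=N)
    return N * np.sum(per_ancestor ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.normal(size=50)                        # synthetic observations (placeholder data)
    x, eve = bootstrap_filter_with_eve_indices(y, seed=1)
    print("filter mean estimate:", x.mean())
    print("genealogy-based variance estimate:", genealogy_variance_estimate(x, eve))
```

In an adaptive run, the resampling times or mutation kernels would be tuned from the particle history; the point of the sketch is only that the Eve indices, and hence the variance estimate, can be maintained at negligible extra cost alongside the usual selection/mutation recursion.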
