Limit theorems for weighted and regular Multilevel estimators (1611.05275v1)

Published 16 Nov 2016 in math.PR

Abstract: We analyze, in terms of a.s. convergence and weak error rate, the performance of the Multilevel Monte Carlo estimator (MLMC) introduced in [Gil08] and of its weighted version, the Multilevel Richardson-Romberg estimator (ML2R), introduced in [LP14]. These two estimators make it possible to compute a very accurate Monte Carlo type approximation of $I_0 = \mathbb{E}[Y_0]$ when the (non-degenerate) random variable $Y_0 \in L^2(\mathbb{P})$ cannot be simulated (exactly) at a reasonable computational cost, whereas a family of simulatable approximations $(Y_h)_{h \in \mathcal{H}}$ is available. We carry out these investigations in an abstract framework before applying our results, mainly a Strong Law of Large Numbers and a Central Limit Theorem, to some typical fields of application: discretization schemes of diffusions and nested Monte Carlo.
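
The abstract only states the telescoping idea behind MLMC at a high level, so here is a minimal sketch on a toy problem (Euler discretization of a geometric Brownian motion, where $\mathbb{E}[X_T]$ is known in closed form). The example, its parameters (drift, volatility, level count, per-level sample sizes) and all function names are illustrative assumptions, not taken from the paper; the weighted ML2R variant, which additionally introduces Richardson-Romberg weights across levels, is not shown.

```python
import numpy as np

# Illustrative sketch of a (regular) Multilevel Monte Carlo estimator in the
# spirit of [Gil08]. The toy target is I_0 = E[X_T] for a geometric Brownian
# motion, approximated by Euler schemes Y_h with step h = T / 2^level.

rng = np.random.default_rng(0)

def euler_gbm_pair(n_paths, n_steps, x0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Simulate the fine Euler approximation with step T/n_steps together with
    the coarser one (step 2T/n_steps) driven by the SAME Brownian increments,
    i.e. the coupling that keeps the level corrections small."""
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    x_fine = np.full(n_paths, x0)
    for k in range(n_steps):
        x_fine = x_fine + mu * x_fine * dt + sigma * x_fine * dW[:, k]
    if n_steps == 1:
        return x_fine, None  # coarsest level: no paired coarse path
    dt_c = 2 * dt
    dW_c = dW[:, ::2] + dW[:, 1::2]  # aggregate fine increments pairwise
    x_coarse = np.full(n_paths, x0)
    for k in range(n_steps // 2):
        x_coarse = x_coarse + mu * x_coarse * dt_c + sigma * x_coarse * dW_c[:, k]
    return x_fine, x_coarse

def mlmc_estimate(L=5, N0=200_000):
    """Telescopic MLMC estimator:
    E[Y_{h_L}] = E[Y_{h_0}] + sum_{l=1}^{L} E[Y_{h_l} - Y_{h_{l-1}}],
    with h_l = T / 2^l and crude per-level sample sizes N_l = N0 / 2^l."""
    estimate = 0.0
    for level in range(L + 1):
        n_steps = 2 ** level
        n_paths = max(int(N0 / 2 ** level), 1_000)
        fine, coarse = euler_gbm_pair(n_paths, n_steps)
        if level == 0:
            estimate += fine.mean()             # plain MC at the coarsest level
        else:
            estimate += (fine - coarse).mean()  # correction term at level l
    return estimate

print("MLMC estimate:", mlmc_estimate())
print("exact E[X_T] :", 1.0 * np.exp(0.05 * 1.0))  # x0 * exp(mu * T)
```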

