
Multilevel Richardson-Romberg extrapolation (1401.1177v4)

Published 6 Jan 2014 in math.PR

Abstract: We propose and analyze a Multilevel Richardson-Romberg (MLRR) estimator which combines the higher order bias cancellation of the Multistep Richardson-Romberg method introduced in [Pa07] and the variance control resulting from the stratification introduced in the Multilevel Monte Carlo (MLMC) method (see [Hei01, Gi08]). Thus, in standard frameworks like discretization schemes of diffusion processes, the root mean squared error (RMSE) $\varepsilon > 0$ can be achieved with our MLRR estimator with a global complexity of $\varepsilon^{-2} \log(1/\varepsilon)$ instead of $\varepsilon^{-2} (\log(1/\varepsilon))^2$ with the standard MLMC method, at least when the weak error $\mathbf{E}[Y_h]-\mathbf{E}[Y_0]$ of the biased implemented estimator $Y_h$ can be expanded at any order in $h$ and $\|Y_h - Y_0\|_2 = O(h^{\frac{1}{2}})$. The MLRR estimator is then halfway between a regular MLMC and a virtual unbiased Monte Carlo. When the strong error $\|Y_h - Y_0\|_2 = O(h^{\frac{\beta}{2}})$, $\beta < 1$, the gain of MLRR over MLMC becomes even more striking. We carry out numerical simulations to compare these estimators in two settings: vanilla and path-dependent option pricing by Monte Carlo simulation and the less classical Nested Monte Carlo simulation.
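To make the bias cancellation at the heart of the method concrete, here is a minimal sketch (not the paper's MLRR estimator) of a two-level Richardson-Romberg extrapolation applied to an Euler discretization of geometric Brownian motion. The function name `euler_gbm` and all parameter values are illustrative assumptions, and the two levels here use independent samples, unlike the correlated (stratified) level simulations the multilevel method relies on for variance control.

```python
import numpy as np

def euler_gbm(x0, r, sigma, T, n_steps, n_paths, rng):
    """Euler scheme for dX = r*X dt + sigma*X dW; weak error is O(h)."""
    h = T / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        x = x + r * x * h + sigma * x * rng.normal(0.0, np.sqrt(h), n_paths)
    return x

rng = np.random.default_rng(0)
x0, r, sigma, T = 1.0, 1.0, 0.1, 1.0  # illustrative parameters
n_paths = 100_000

# Biased estimators of E[X_T] at step sizes h and h/2.
crude = euler_gbm(x0, r, sigma, T, 2, n_paths, rng).mean()  # bias O(h)
fine = euler_gbm(x0, r, sigma, T, 4, n_paths, rng).mean()   # bias O(h/2)

# Richardson-Romberg extrapolation: 2*E[Y_{h/2}] - E[Y_h] cancels the
# first-order term of the weak error expansion, leaving an O(h^2) bias.
rr = 2.0 * fine - crude

exact = x0 * np.exp(r * T)  # E[X_T] = x0 * exp(r*T) for GBM
```

The Multistep Richardson-Romberg method of [Pa07] extends this two-level cancellation to higher orders, and the MLRR estimator of the paper additionally simulates the levels in a stratified, correlated way so that the variance of the combined estimator stays controlled as the number of levels grows.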
