
Gain coefficients for scrambled Halton points (2308.08035v1)

Published 15 Aug 2023 in math.NA, cs.NA, and stat.CO

Abstract: Randomized quasi-Monte Carlo, via certain scramblings of digital nets, produces unbiased estimates of $\int_{[0,1]^d}f(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}$ with a variance that is $o(1/n)$ for any $f\in L^2[0,1]^d$. It also satisfies some non-asymptotic bounds where the variance is no larger than some $\Gamma<\infty$ times the ordinary Monte Carlo variance. For scrambled Sobol' points, this quantity $\Gamma$ grows exponentially in $d$. For scrambled Faure points, $\Gamma \leqslant \exp(1)\doteq 2.718$ in any dimension, but those points are awkward to use for large $d$. This paper shows that certain scramblings of Halton sequences have gains below an explicit bound that is $O(\log d)$ but not $O((\log d)^{1-\epsilon})$ for any $\epsilon>0$ as $d\to\infty$. For $6\leqslant d\leqslant 10^6$, the upper bound on the gain coefficient is never larger than $3/2+\log(d/2)$.
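The estimator the abstract describes can be sketched numerically: average $f$ over $n$ scrambled Halton points to get an unbiased estimate of $\int_{[0,1]^d}f(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}$. This is a minimal sketch, not the paper's own code; it assumes SciPy's `scipy.stats.qmc.Halton` sampler with `scramble=True` (SciPy's implementation of scrambled Halton points), and `rqmc_estimate` is a hypothetical helper name.

```python
# Sketch of a randomized QMC integral estimate using scrambled Halton points.
# Assumption: SciPy >= 1.7 provides scipy.stats.qmc.Halton with scramble=True.
import numpy as np
from scipy.stats import qmc

def rqmc_estimate(f, d, n, seed=None):
    """Estimate the integral of f over [0,1]^d from n scrambled Halton points."""
    sampler = qmc.Halton(d=d, scramble=True, seed=seed)
    x = sampler.random(n)          # (n, d) array of points in [0,1]^d
    return f(x).mean()             # unbiased estimate of the integral

# Example: f(x) = x_1 * x_2 on [0,1]^2, whose true integral is 1/4.
est = rqmc_estimate(lambda x: np.prod(x, axis=1), d=2, n=1024, seed=0)
```

Averaging the estimate over independent scramblings (different seeds) gives both the RQMC estimate and an empirical variance, which is what the gain coefficient $\Gamma$ bounds relative to plain Monte Carlo.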
