
The power of online thinning in reducing discrepancy (1608.02895v4)

Published 9 Aug 2016 in math.PR and cs.DS

Abstract: Consider an infinite sequence of independent, uniformly chosen points from $[0,1]^d$. After looking at each point in the sequence, an overseer is allowed to either keep it or reject it, and this choice may depend on the locations of all previously kept points. However, the overseer must keep at least one of every two consecutive points. We call a sequence generated in this fashion a \emph{two-thinning} sequence. Here, the purpose of the overseer is to control the discrepancy of the empirical distribution of points, that is, after selecting $n$ points, to reduce the maximal deviation of the number of points inside any axis-parallel hyper-rectangle of volume $A$ from $nA$. Our main result is an explicit low complexity two-thinning strategy which guarantees discrepancy of $O(\log^{2d+1} n)$ for all $n$ with high probability (compare with $\Theta(\sqrt{n\log\log n})$ without thinning). The case $d=1$ of this result answers a question of Benjamini. We also extend the construction to achieve the same asymptotic bound for ($1+\beta$)-thinning, a set-up in which rejecting is only allowed with probability $\beta$ independently for each point. In addition, we suggest an improved and simplified strategy which we conjecture to guarantee discrepancy of $O(\log^{d+1} n)$ (compare with $\Theta(\log^{d} n)$, the best known construction of a low discrepancy sequence). Finally, we provide theoretical and empirical evidence for our conjecture, and present simulations supporting the viability of our construction for applications.
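The two-thinning setup is easy to simulate. Below is a minimal Python sketch for $d=1$ of a simplified pairwise variant: from each pair of fresh uniform points, keep the one lying in currently less-crowded dyadic intervals (keeping one of every two consecutive points trivially satisfies the two-thinning constraint). The names `two_thinning`, `dyadic_score`, and `discrepancy` are illustrative, and this greedy rule is only in the spirit of the paper's strategy, not the authors' exact construction.

```python
import random

def dyadic_score(x, counts):
    """Sum of occupancy counts of the dyadic intervals containing x,
    one interval per level. Lower score = less crowded region."""
    return sum(counts[l][int(x * (1 << l))] for l in range(len(counts)))

def two_thinning(n, levels=10, seed=0):
    """Keep n points out of 2n uniform samples: from each consecutive
    pair, keep the point with the smaller dyadic occupancy score."""
    rng = random.Random(seed)
    counts = [[0] * (1 << l) for l in range(levels)]
    kept = []
    while len(kept) < n:
        a, b = rng.random(), rng.random()
        x = a if dyadic_score(a, counts) <= dyadic_score(b, counts) else b
        kept.append(x)
        for l in range(levels):
            counts[l][int(x * (1 << l))] += 1
    return kept

def discrepancy(points):
    """Unnormalized star discrepancy: max over t of
    |#{x_i < t} - n*t|, evaluated at the sorted sample points.
    (Interval discrepancy is within a factor 2 of this.)"""
    n = len(points)
    d = 0.0
    for i, x in enumerate(sorted(points)):
        d = max(d, abs(i - n * x), abs(i + 1 - n * x))
    return d
```

Averaged over a few seeds with $n = 2000$, the thinned sequence exhibits markedly lower discrepancy than an unthinned i.i.d. sample of the same size, consistent with the polylogarithmic-versus-$\sqrt{n}$ gap the paper describes.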

Citations (18)
