On the Discrepancy of Jittered Sampling (1510.00251v1)
Abstract: We study the discrepancy of jittered sampling sets: such a set $\mathcal{P} \subset [0,1]^d$ is generated for fixed $m \in \mathbb{N}$ by partitioning $[0,1]^d$ into $m^d$ axis-aligned cubes of equal measure and placing a random point inside each of the $N = m^d$ cubes. We prove that, for $N$ sufficiently large, $$ \frac{1}{10}\frac{d}{N^{\frac{1}{2} + \frac{1}{2d}}} \leq \mathbb{E} D_N^*(\mathcal{P}) \leq \frac{\sqrt{d} (\log{N})^{\frac{1}{2}}}{N^{\frac{1}{2} + \frac{1}{2d}}},$$ where the upper bound with an unspecified constant $C_d$ was proven earlier by Beck. Our proof makes crucial use of the sharp Dvoretzky-Kiefer-Wolfowitz inequality and a suitably tailored Bernstein inequality; we have reason to believe that the upper bound has the sharp scaling in $N$. Additional heuristics suggest that jittered sampling should be able to improve known bounds on the inverse of the star-discrepancy in the regime $N \gtrsim d^d$. We also prove a partition principle showing that every partition of $[0,1]^d$ combined with a jittered sampling construction gives rise to a set whose expected squared $L^2$-discrepancy is smaller than that of purely random points.
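The jittered sampling construction described above is straightforward to make concrete. Below is a minimal sketch in Python/NumPy, our own illustration rather than code from the paper; the function name `jittered_sampling` and its interface are assumptions for demonstration. It partitions $[0,1]^d$ into $m^d$ axis-aligned cubes of side $1/m$ and draws one uniform point in each, yielding $N = m^d$ points.

```python
import numpy as np

def jittered_sampling(m, d, rng=None):
    """Return N = m**d jittered sample points in [0, 1]**d.

    Illustrative sketch (not the paper's code): the unit cube is split
    into m**d axis-aligned cells of side 1/m, and one uniformly random
    point is placed inside each cell.
    """
    rng = np.random.default_rng(rng)
    # Lower-left corner of every grid cell, one row per cell: shape (m**d, d).
    grid = np.stack(np.meshgrid(*[np.arange(m)] * d, indexing="ij"), axis=-1)
    corners = grid.reshape(-1, d) / m
    # One uniform offset inside each cell of side length 1/m.
    jitter = rng.random(corners.shape) / m
    return corners + jitter

# Example: N = 5**2 = 25 jittered points in the unit square.
points = jittered_sampling(m=5, d=2, rng=0)
assert points.shape == (25, 2)
```

Compared with $N$ i.i.d. uniform points, this stratification guarantees exactly one point per cell, which is the mechanism behind the improved expected discrepancy bounds stated in the abstract.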