
Strong Partition Principle for Star Discrepancy

Updated 31 December 2025
  • The strong partition principle is a concept that compares expected star discrepancy between random sampling and stratified methods, proving that structured partitions reduce discrepancy.
  • It employs techniques like discrete δ-covers and Bernstein’s inequality to quantitatively establish that jittered and convex equivolume sampling yield lower variance than Monte Carlo methods.
  • Practical implications include improved error control in numerical integration and quasi-Monte Carlo methods, with empirical gains of 10%–30% over traditional random sampling.

The strong partition principle for star discrepancy establishes a rigorous inequality between the expected uniformity of finite point sets generated by random and stratified sampling schemes in $[0,1]^d$. Specifically, it asserts that for any partition of the unit cube into $N$ equal-measure cells, drawing one uniformly random point in each cell yields a point set whose expected star discrepancy is strictly smaller than that produced by $N$ independent uniformly sampled points (classic Monte Carlo). Recent research has extended and quantified this principle, showing not only monotonic improvement from grid (jittered) stratification but also further gains from refined convex equivolume partitions, effectively resolving major open questions in the star discrepancy literature (Xu et al., 29 Dec 2025, Xu et al., 25 Dec 2025, Xian et al., 2023, Xian et al., 2022).

1. Star Discrepancy: Definition and Importance

For a point set $P_N = \{t_1, \dots, t_N\} \subset [0,1]^d$, the star discrepancy is defined by

$$D^*_N(P_N) = \sup_{x \in [0,1]^d} \left| \lambda([0,x]) - \frac{1}{N} \sum_{n=1}^N \mathbf{1}_{[0,x]}(t_n) \right|,$$

where $\lambda([0,x]) = \prod_{i=1}^d x_i$ denotes the Lebesgue measure of the anchored rectangle $[0,x] = [0,x_1) \times \cdots \times [0,x_d)$.

Star discrepancy quantifies the maximal difference between the empirical measure of $P_N$ and the uniform measure, measured over all axis-parallel anchored boxes. It serves as a canonical metric for uniformity in applications such as numerical integration, randomized quasi-Monte Carlo methods, and the analysis of irregularities of distribution.
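As a concrete illustration (not taken from the cited papers), the supremum in the definition can be probed by evaluating the discrepancy only at anchored boxes whose corners are built from the points' own coordinates; checking both open and closed boxes at each candidate corner recovers the exact value in simple low-dimensional cases. A minimal Python sketch, practical only for small $N$ and $d$:

```python
import itertools
import numpy as np

def star_discrepancy_lb(points):
    """Lower bound on D*_N: evaluate the defining supremum only at
    candidate corners built from the points' own coordinates (plus 1.0).
    Feasible only for small N and d, since it enumerates (N+1)^d corners."""
    pts = np.asarray(points, dtype=float)
    n, d = pts.shape
    coords = [np.unique(np.append(pts[:, i], 1.0)) for i in range(d)]
    best = 0.0
    for x in itertools.product(*coords):
        x = np.asarray(x)
        vol = x.prod()                                 # lambda([0, x))
        frac_open = np.all(pts < x, axis=1).mean()     # points in [0, x)
        frac_closed = np.all(pts <= x, axis=1).mean()  # points in [0, x]
        best = max(best, abs(vol - frac_open), abs(frac_closed - vol))
    return best
```

For example, `star_discrepancy_lb([[0.25], [0.75]])` returns 0.25, the exact star discrepancy of these two points in $d = 1$.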

2. Sampling Schemes: Random, Jittered, and Convex Partition Stratification

Three principal sampling paradigms are considered:

  • Simple Random Sampling ($X$): $N$ points drawn i.i.d. uniformly from $[0,1]^d$.
  • Jittered Sampling ($Y$): $[0,1]^d$ is subdivided into $N = m^d$ congruent axis-aligned cubes. One point is sampled uniformly from each cube.
  • Convex Equivolume Partition Sampling ($Z$): $[0,1]^d$ is partitioned into $N$ convex regions of equal volume, with exactly two adjacent cells on a 2-face merged and split obliquely (parameterized by an angle $\theta$); one point is sampled uniformly from each cell.

Each stratification step (from random, to grid-based, to convex-refined) exerts a quantitative reduction in the variance of empirical counts over test boxes, directly improving the expected star discrepancy of the resulting point set (Xu et al., 29 Dec 2025).
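The first two schemes are straightforward to implement; a minimal sketch (the convex-refined scheme $Z$ depends on partition details given in the cited papers and is omitted here):

```python
import numpy as np

def simple_random(n, d, rng):
    """Scheme X: n i.i.d. uniform points in [0,1]^d."""
    return rng.random((n, d))

def jittered(m, d, rng):
    """Scheme Y: one uniform point in each of the m^d congruent
    axis-aligned subcubes of side 1/m."""
    lower = np.stack(np.meshgrid(*[np.arange(m)] * d, indexing="ij"),
                     axis=-1).reshape(-1, d)   # lower-left corner of each cell
    return (lower + rng.random(lower.shape)) / m
```

By construction, `jittered(m, d, rng)` places exactly one point in each of the $m^d$ subcubes, which is the source of the variance reduction discussed below.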

3. Statement and Proof Outline of the Strong Partition Principle

Let $\Omega = \{\Omega_1, \dots, \Omega_N\}$ be any partition of $[0,1]^d$ into cells of volume $1/N$. For $W = \{W_1, \dots, W_N\}$, where each $W_i$ is uniform on $\Omega_i$, and for $X = \{X_1, \dots, X_N\}$ with the $X_i$ i.i.d. uniform on $[0,1]^d$ (simple random sampling, as above), the strong partition principle states: $\mathbb{E}\bigl[D_N^*(W)\bigr] < \mathbb{E}\bigl[D_N^*(X)\bigr]$ (Xu et al., 25 Dec 2025).

The proof proceeds through discrete $\delta$-covers to reduce the supremum over all anchored rectangles to a finite set; variance comparisons show that stratified sums incur strictly reduced variance on boundary cells versus the binomial variance of Monte Carlo. Bernstein's inequality converts these variance reductions into strictly smaller tail probabilities for each test box, and integration over thresholds gives a strict expectation inequality.
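The variance-reduction step at the heart of the proof can be checked numerically: fix one anchored test box and compare the variance of the point count under Monte Carlo and jittered sampling. An illustrative experiment (the box corner and parameters below are arbitrary choices, not from the papers):

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, trials = 8, 2, 5000
N = m ** d
x = np.array([0.37, 0.61])            # an arbitrary anchored test box [0, x)

def count_in_box(pts):
    return np.all(pts < x, axis=1).sum()

lower = np.stack(np.meshgrid(*[np.arange(m)] * d, indexing="ij"),
                 axis=-1).reshape(-1, d)

# Monte Carlo counts have binomial variance N p (1 - p); jittered counts
# get variance only from the cells cut by the box boundary.
mc = np.array([count_in_box(rng.random((N, d))) for _ in range(trials)])
jt = np.array([count_in_box((lower + rng.random(lower.shape)) / m)
               for _ in range(trials)])
print(f"MC variance {mc.var():.2f}, jittered variance {jt.var():.2f}")
```

The jittered variance comes out markedly smaller, consistent with the boundary-cell argument above.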

4. Quantitative Results and Comparison Among Schemes

The strong partition principle has been strengthened to a strict chain of expected discrepancy inequalities: $\mathbb{E}\bigl(D^*_N(Z)\bigr) \leq \mathbb{E}\bigl(D^*_N(Y)\bigr) < \mathbb{E}\bigl(D^*_N(X)\bigr)$ for $X$, $Y$, and $Z$ as above, with $\theta \geq \arctan(1/2)$ rendering the left inequality strict (Xu et al., 29 Dec 2025).

Explicit upper bounds are obtained:

  • For simple random sampling:

$$\mathbb{E}[D^*_N(X)] \leq \frac{\sqrt{2d} + 1}{N^{1/2}}$$

  • For jittered sampling:

$$\mathbb{E}[D^*_N(Y)] \leq \frac{\sqrt{2d} + 1}{N^{1/2 + 1/(2d)}}$$

  • For convex equivolume partitions:

$$\mathbb{E}[D^*_N(Z)] \leq \frac{\sqrt{2d + \frac{2P(\theta)}{3^{d-2} N^{2 - 1/d}}} + 1}{N^{1/2 + 1/(2d)}}$$

where $P(\theta)$ is a strictly negative function for certain $\theta$, leading to strictly better asymptotics for $Z$ compared to $Y$. For jittered sampling, recent work has additionally improved the constants in the expected bound (Xu et al., 25 Dec 2025).
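Comparing the first two bounds, the jittered bound improves on the Monte Carlo one by exactly the factor $N^{1/(2d)}$; a quick numerical check of the closed forms:

```python
import math

def bound_random(N, d):
    """Expected star discrepancy bound for simple random sampling."""
    return (math.sqrt(2 * d) + 1) / N ** 0.5

def bound_jittered(N, d):
    """Expected star discrepancy bound for jittered sampling."""
    return (math.sqrt(2 * d) + 1) / N ** (0.5 + 1 / (2 * d))

# e.g. d = 2, N = 8^2 = 64: the ratio equals 64^(1/4) ~ 2.83
print(bound_random(64, 2) / bound_jittered(64, 2))
```

The ratio grows with $N$, so the relative advantage of stratification increases with the number of samples (for fixed $d$).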

5. Resolution of Open Questions and Theoretical Significance

Kiderlen and Pausinger (2021) asked whether a convex equivolume partition could yield strictly lower expected star discrepancy than jittered sampling. The new results definitively resolve this, demonstrating that even small convex refinements of the grid partition enable strictly stronger variance reductions, and hence lower expected discrepancy (Xian et al., 2022, Xu et al., 29 Dec 2025, Xu et al., 25 Dec 2025).

Moreover, the principle generalizes from $L_2$-discrepancy to star discrepancy, extending the scope of previous results and providing a unified framework for stratification-based discrepancy reduction.

6. Practical Implications and Extensions

Empirical findings confirm the theoretical prediction: stratified sampling consistently yields lower average star discrepancy than independent random sampling, with improvement ratios of 10%–30% in moderate dimensions. Improved bounds are also much tighter than previous estimates, enabling more accurate error control in randomized quasi-Monte Carlo integration (Xu et al., 25 Dec 2025).
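A small self-contained experiment in the same spirit, approximating $D^*_N$ over a fixed grid of test-box corners (the specific gain observed depends on $d$, $N$, and the random seed, and the corner grid only approximates the supremum):

```python
import numpy as np

rng = np.random.default_rng(7)
m, d, reps = 4, 2, 300
N = m ** d

# fixed grid of anchored-box corners used to approximate the supremum
axis = np.linspace(0.05, 1.0, 20)
corners = np.stack(np.meshgrid(*[axis] * d, indexing="ij"),
                   axis=-1).reshape(-1, d)

def approx_disc(pts):
    vol = corners.prod(axis=1)
    frac = np.all(pts[None, :, :] < corners[:, None, :], axis=-1).mean(axis=1)
    return np.abs(vol - frac).max()

lower = np.stack(np.meshgrid(*[np.arange(m)] * d, indexing="ij"),
                 axis=-1).reshape(-1, d)

mc = np.mean([approx_disc(rng.random((N, d))) for _ in range(reps)])
jt = np.mean([approx_disc((lower + rng.random(lower.shape)) / m)
              for _ in range(reps)])
print(f"avg D* random {mc:.3f}, jittered {jt:.3f}, gain {1 - jt / mc:.0%}")
```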

Potential extensions include:

  • Exploring optimal (not necessarily grid-like) partitions adapted to specific test sets.
  • Hybridization with deterministic low-discrepancy constructions for further variance and discrepancy reductions.
  • Generalization to other discrepancy metrics ($L_p$, weighted discrepancies).
  • Algorithmic design for scalable near-optimal partitions in high dimensions.

7. Summary Table: Sampling Schemes and Expected Star Discrepancy Bounds

| Sampling Scheme | Partition Type | Expected Star Discrepancy Bound |
| --- | --- | --- |
| Simple Random Sampling ($X$) | None (i.i.d. uniform) | $\leq \frac{\sqrt{2d} + 1}{N^{1/2}}$ |
| Jittered Sampling ($Y$) | Axis-aligned subcubes | $\leq \frac{\sqrt{2d} + 1}{N^{1/2 + 1/(2d)}}$ |
| Convex Equivolume Partition ($Z$) | Grid + refined convex cells | $\leq \frac{\sqrt{2d + \frac{2P(\theta)}{3^{d-2}N^{2-1/d}}} + 1}{N^{1/2 + 1/(2d)}}$ |

The table summarizes the hierarchy of expected star discrepancy bounds under increasingly structured sampling schemes, illustrating the strict inequalities realized by the strong partition principle (Xu et al., 29 Dec 2025, Xu et al., 25 Dec 2025, Xian et al., 2023).

References

  • “On a Class of Partitions with Lower Expected Star Discrepancy and Its Upper Bound than Jittered Sampling” (Xu et al., 29 Dec 2025)
  • “Expected star discrepancy based on stratified sampling” (Xu et al., 25 Dec 2025)
  • “On the lower expected star discrepancy for jittered sampling than simple random sampling” (Xian et al., 2023)
  • “Star discrepancy for new stratified random sampling I: optimal expected star discrepancy” (Xian et al., 2022)
