Strong Partition Principle for Star Discrepancy
- The strong partition principle compares the expected star discrepancy of random versus stratified sampling, proving that sampling one point per cell of an equal-measure partition strictly reduces it.
- It employs techniques like discrete δ-covers and Bernstein’s inequality to quantitatively establish that jittered and convex equivolume sampling yield lower variance than Monte Carlo methods.
- Practical implications include improved error control in numerical integration and quasi-Monte Carlo methods, with empirical gains of 10%–30% over traditional random sampling.
The strong partition principle for star discrepancy establishes a rigorous inequality between the expected uniformity of finite point sets generated by random and stratified sampling schemes in $[0,1]^d$. Specifically, it asserts that for any partition of the unit cube into equal-measure cells, drawing one uniformly random point in each cell yields a point set whose expected star discrepancy is strictly smaller than that produced by independent uniformly sampled points (classic Monte Carlo). Recent research has extended and quantified this principle, showing not only monotonic improvement from grid (jittered) stratification but also further gains from refined convex equivolume partitions, effectively resolving major open questions in the star discrepancy literature (Xu et al., 29 Dec 2025, Xu et al., 25 Dec 2025, Xian et al., 2023, Xian et al., 2022).
1. Star Discrepancy: Definition and Importance
For a point set $P = \{p_1, \ldots, p_N\} \subset [0,1]^d$, the star discrepancy is defined by
$$D_N^*(P) = \sup_{x \in [0,1]^d} \left| \frac{\#\{i : p_i \in [0,x)\}}{N} - \lambda([0,x)) \right|,$$
where $\lambda([0,x))$ denotes the Lebesgue measure of the anchored rectangle $[0,x) = \prod_{j=1}^{d} [0, x_j)$.
Star discrepancy quantifies the maximal difference between the empirical measure of $P$ and the uniform measure, measured over all axis-parallel anchored boxes. It serves as a canonical metric for uniformity in applications such as numerical integration, randomized quasi-Monte Carlo methods, and the analysis of irregularities of distribution.
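To make the definition concrete, here is a minimal brute-force evaluation (a sketch; the function name and grid enumeration are our illustration, and the exponential cost restricts it to small $N$ and $d$):

```python
import itertools
import numpy as np

def star_discrepancy(points: np.ndarray) -> float:
    """Exact star discrepancy of a small point set in [0,1]^d.

    Brute force over the critical grid built from the point coordinates
    (plus 1.0 on each axis); the supremum over anchored boxes is attained,
    in the limit, at these corners.  Illustrative only: cost is O(N^d).
    """
    n, d = points.shape
    # Candidate box corners: per-axis coordinate values plus the value 1.0.
    axes = [np.unique(np.append(points[:, j], 1.0)) for j in range(d)]
    disc = 0.0
    for corner in itertools.product(*axes):
        x = np.array(corner)
        vol = np.prod(x)                                # Lebesgue measure of [0, x)
        n_open = np.sum(np.all(points < x, axis=1))     # points strictly dominated by x
        n_closed = np.sum(np.all(points <= x, axis=1))  # points weakly dominated by x
        # Open count captures the sup approached from below the corner,
        # closed count the sup approached from above.
        disc = max(disc, vol - n_open / n, n_closed / n - vol)
    return float(disc)

rng = np.random.default_rng(0)
print(star_discrepancy(rng.random((16, 2))))   # 16 random points in [0,1]^2
```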
2. Sampling Schemes: Random, Jittered, and Convex Partition Stratification
Three principal sampling paradigms are considered:
- Simple Random Sampling ($P_{\mathrm{MC}}$): $N$ points drawn i.i.d. uniformly from $[0,1]^d$.
- Jittered Sampling ($P_{\mathrm{JS}}$): $[0,1]^d$ is subdivided into $N = m^d$ congruent axis-aligned cubes of side $1/m$. One point is sampled uniformly from each cube.
- Convex Equivolume Partition Sampling ($P_{\mathrm{CEV}}$): $[0,1]^d$ is partitioned into $N$ convex regions of equal volume, with exactly two adjacent cells sharing a $2$-face merged and re-split obliquely (parameterized by an angle $\theta$); one point is sampled uniformly from each cell.
Each successive stratification step (from random, to grid-based, to convex-refined) quantitatively reduces the variance of empirical counts over test boxes, directly improving the expected star discrepancy of the resulting point set (Xu et al., 29 Dec 2025).
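The following sketch generates the three kinds of point sets. The helper names are ours, and `oblique_pair` is a simplified 2D stand-in for the papers' convex equivolume construction, assuming a chord through the centre of the merged rectangle (which always halves its area):

```python
import numpy as np

def mc_sample(n: int, d: int, rng: np.random.Generator) -> np.ndarray:
    """Simple random sampling: n i.i.d. uniform points in [0,1]^d."""
    return rng.random((n, d))

def jittered_sample(m: int, d: int, rng: np.random.Generator) -> np.ndarray:
    """Jittered sampling: one uniform point in each of the N = m^d
    congruent subcubes of side 1/m."""
    corners = np.array(list(np.ndindex(*(m,) * d)))    # integer lower corners
    return (corners + rng.random((m ** d, d))) / m

def oblique_pair(m: int, theta: float, rng: np.random.Generator) -> np.ndarray:
    """Toy 2D stand-in for the convex equivolume refinement: merge two
    horizontally adjacent grid cells into a (2/m) x (1/m) rectangle and
    re-split it by a chord of angle theta through its centre.  Both pieces
    are convex with equal volume; one uniform point is drawn per piece by
    rejection sampling from the rectangle."""
    cx, cy = 1.0 / m, 0.5 / m            # centre of the merged rectangle
    pts = []
    for side in (-1.0, 1.0):
        while True:                      # resample until the point lands on `side`
            p = rng.random(2) * (2.0 / m, 1.0 / m)
            s = np.sin(theta) * (p[0] - cx) - np.cos(theta) * (p[1] - cy)
            if s * side > 0:
                pts.append(p)
                break
    return np.asarray(pts)
```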
3. Statement and Proof Outline of the Strong Partition Principle
Let $\Omega = \{\Omega_1, \ldots, \Omega_N\}$ be any partition of $[0,1]^d$ into cells of volume $1/N$. For $P_\Omega = \{X_1, \ldots, X_N\}$, where each $X_i$ is uniform on $\Omega_i$, and for $P_{\mathrm{MC}} = \{Y_1, \ldots, Y_N\}$ with $Y_i$ i.i.d. uniform on $[0,1]^d$, the strong partition principle states:
$$\mathbb{E}\,D_N^*(P_\Omega) < \mathbb{E}\,D_N^*(P_{\mathrm{MC}})$$
(Xu et al., 25 Dec 2025).
The proof proceeds through discrete $\delta$-covers, which reduce the supremum over all anchored rectangles to a finite set of test boxes; variance comparisons then show that stratified counts incur strictly reduced variance on boxes whose boundaries cut the partition cells, compared with the binomial variance of Monte Carlo. Bernstein's inequality converts these variance reductions into strictly smaller tail probabilities for each test box, and integrating over thresholds yields a strict inequality in expectation.
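For a fixed test box $R$ from the $\delta$-cover, the variance-comparison step can be sketched as follows (notation as above; $p_i$ denotes the hitting probability of cell $\Omega_i$):

```latex
% MC count: N i.i.d. Bernoulli(\lambda(R)) indicators.
\operatorname{Var}\!\Big[\sum_{i=1}^{N}\mathbf{1}\{Y_i\in R\}\Big]
   = N\lambda(R)\bigl(1-\lambda(R)\bigr)
   = N\lambda(R) - N\lambda(R)^{2}.
% Stratified count: independent Bernoulli(p_i) indicators with
% p_i = N\,\lambda(R\cap\Omega_i), so that \sum_i p_i = N\lambda(R).
\operatorname{Var}\!\Big[\sum_{i=1}^{N}\mathbf{1}\{X_i\in R\}\Big]
   = \sum_{i=1}^{N} p_i(1-p_i)
   = N\lambda(R) - \sum_{i=1}^{N} p_i^{2}
   \;\le\; N\lambda(R) - N\lambda(R)^{2}.
% The last step is Cauchy--Schwarz: \sum_i p_i^2 \ge (\sum_i p_i)^2/N
% = N\lambda(R)^2, strict unless all p_i coincide, i.e. unless R meets
% every cell with the same relative measure.
```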
4. Quantitative Results and Comparison Among Schemes
The strong partition principle has been strengthened to a strict chain of expected discrepancy inequalities: for $P_{\mathrm{CEV}}$, $P_{\mathrm{JS}}$, and $P_{\mathrm{MC}}$ as above,
$$\mathbb{E}\,D_N^*(P_{\mathrm{CEV}}) < \mathbb{E}\,D_N^*(P_{\mathrm{JS}}) < \mathbb{E}\,D_N^*(P_{\mathrm{MC}}),$$
with the oblique convex refinement rendering the left inequality strict (Xu et al., 29 Dec 2025).
Explicit upper bounds are obtained:
- For simple random sampling: $\mathbb{E}\,D_N^*(P_{\mathrm{MC}}) = O\big(\sqrt{d/N}\big)$.
- For jittered sampling (with $N = m^d$): $\mathbb{E}\,D_N^*(P_{\mathrm{JS}}) \le C\,\sqrt{d}\,N^{-\frac{1}{2}-\frac{1}{2d}}\sqrt{\log N}$ for an absolute constant $C$.
- For convex equivolume partitions: $\mathbb{E}\,D_N^*(P_{\mathrm{CEV}}) \le \big(C + \varepsilon(\theta)\big)\,\sqrt{d}\,N^{-\frac{1}{2}-\frac{1}{2d}}\sqrt{\log N}$,
where $\varepsilon(\theta)$ is a strictly negative function for certain angles $\theta$, leading to a strictly better bound for $P_{\mathrm{CEV}}$ than for $P_{\mathrm{JS}}$. For jittered sampling, recent work has additionally improved the constants in the expected bound (Xu et al., 25 Dec 2025).
5. Resolution of Open Questions and Theoretical Significance
Kiderlen and Pausinger (2021) asked whether a convex equivolume partition could yield strictly lower expected star discrepancy than jittered sampling. The new results resolve this question definitively, demonstrating that even small convex refinements of the grid partition enable strictly stronger variance reductions, and hence lower expected discrepancy (Xian et al., 2022, Xu et al., 29 Dec 2025, Xu et al., 25 Dec 2025).
Moreover, the principle generalizes from $L_2$-discrepancy to star discrepancy, extending the scope of previous results and providing a unified framework for stratification-based discrepancy reduction.
6. Practical Implications and Extensions
Empirical findings confirm the theoretical prediction: stratified sampling consistently yields lower average star discrepancy than independent random sampling, with improvement ratios of 10%–30% in moderate dimensions. The improved bounds are also much tighter than previous estimates, enabling more accurate error control in randomized quasi-Monte Carlo integration (Xu et al., 25 Dec 2025).
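A toy experiment in this spirit, reusing the illustrative helpers sketched above (the 10%–30% figures come from the cited papers, not from this snippet; small $N$ and $d$ keep the exact discrepancy computation tractable):

```python
# Toy comparison of expected star discrepancy: MC vs. jittered sampling.
# Assumes star_discrepancy, mc_sample, and jittered_sample from the
# sketches earlier in this section are in scope.
import numpy as np

rng = np.random.default_rng(42)
m, d, runs = 3, 2, 200                  # N = m**d = 9 points in [0,1]^2
e_mc = np.mean([star_discrepancy(mc_sample(m ** d, d, rng)) for _ in range(runs)])
e_js = np.mean([star_discrepancy(jittered_sample(m, d, rng)) for _ in range(runs)])
print(f"E[D*_N] MC ~ {e_mc:.4f}  jittered ~ {e_js:.4f}  ratio ~ {e_js / e_mc:.2f}")
```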
Potential extensions include:
- Exploring optimal (not necessarily grid-like) partitions adapted to specific test sets.
- Hybridization with deterministic low-discrepancy constructions for further variance and discrepancy reductions.
- Generalization to other discrepancy metrics ($L_p$ discrepancy, weighted discrepancies).
- Algorithmic design for scalable near-optimal partitions in high dimensions.
7. Summary Table: Sampling Schemes and Expected Star Discrepancy Bounds
| Sampling Scheme | Partition Type | Expected Star Discrepancy Bound |
|---|---|---|
| Simple Random Sampling ($P_{\mathrm{MC}}$) | None (i.i.d. uniform) | $O\big(\sqrt{d/N}\big)$ |
| Jittered Sampling ($P_{\mathrm{JS}}$) | Axis-aligned subcubes | $\le C\,\sqrt{d}\,N^{-\frac{1}{2}-\frac{1}{2d}}\sqrt{\log N}$ |
| Convex Equivolume Partition ($P_{\mathrm{CEV}}$) | Grid + refined convex cells | $\le \big(C+\varepsilon(\theta)\big)\,\sqrt{d}\,N^{-\frac{1}{2}-\frac{1}{2d}}\sqrt{\log N}$, with $\varepsilon(\theta)<0$ |
The table summarizes the hierarchy of expected star discrepancy bounds under increasingly structured sampling schemes, illustrating the strict inequalities realized by the strong partition principle (Xu et al., 29 Dec 2025, Xu et al., 25 Dec 2025, Xian et al., 2023).
References
- “On a Class of Partitions with Lower Expected Star Discrepancy and Its Upper Bound than Jittered Sampling” (Xu et al., 29 Dec 2025)
- “Expected star discrepancy based on stratified sampling” (Xu et al., 25 Dec 2025)
- “On the lower expected star discrepancy for jittered sampling than simple random sampling” (Xian et al., 2023)
- “Star discrepancy for new stratified random sampling I: optimal expected star discrepancy” (Xian et al., 2022)