Refined Discrepancy Estimates
- Refined discrepancy estimates are advanced quantitative bounds that improve classical worst-case measures by incorporating averaging techniques and structure-dependent criteria.
- They enhance error bounds in quasi–Monte Carlo integration by employing $L_p$, BMO, and exponential Orlicz norms to measure the uniformity of point distributions precisely.
- Applications span high-dimensional random constructions, explicit sequence analysis, and acceptance–rejection sampling to optimize computational methods.
Refined discrepancy estimates are advanced quantitative bounds on the distributional irregularity, or "discrepancy," of finite and infinite point sets and sequences in multidimensional unit cubes or under various geometric or statistical transformations. These estimates go beyond classical worst-case (sup-norm) bounds to provide improved rates under averaging, moment, weighted, or structure-dependent criteria, frequently employing advanced techniques from harmonic analysis, probabilistic combinatorics, and Diophantine approximation.
1. Fundamental Notions and Discrepancy Functionals
Let $\mathcal{P} = \{x_1, \ldots, x_N\} \subset [0,1)^d$ be a point set. The star discrepancy measures the largest deviation, over axis-parallel boxes anchored at the origin, between the empirical measure and Lebesgue measure:

$$D_N^*(\mathcal{P}) = \sup_{y \in [0,1]^d} \left| \frac{\#\{n \le N : x_n \in [0,y)\}}{N} - \lambda([0,y)) \right|.$$

Associated functionals include the $L_p$-discrepancy, BMO, and exponential Orlicz norms, as well as generalizations to half-space, $L_p$-averaged, and smooth-weighted settings. Uniformity in multidimensional distribution is crucial for numerical integration and quasi–Monte Carlo (QMC), where low discrepancy guarantees optimal error rates for bounded-variation integrands (Bilyk et al., 2014, Amirkhanyan et al., 2013, Kritzinger et al., 2015).
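In low dimension, $D_N^*$ can be evaluated by restricting the supremum to the finite grid of candidate corners built from the points' coordinates. The following minimal sketch (our own illustration; the function names are hypothetical, not from the cited papers) brackets the supremum by counting points with both strict and non-strict inequalities at each candidate corner:

```python
import itertools
import numpy as np

def star_discrepancy(points: np.ndarray) -> float:
    """Approximate the star discrepancy D_N*(P) of points in [0,1)^d.

    Evaluates the local discrepancy |#{x_n in [0,y)}/N - vol([0,y))| over
    the grid of candidate corners built from the point coordinates and 1.0,
    where the supremum over anchored boxes is essentially attained.
    Exact up to boundary conventions; exponential in d, so small d only.
    """
    N, d = points.shape
    # Candidate corner coordinates per axis: the points' coordinates and 1.
    axes = [np.unique(np.append(points[:, j], 1.0)) for j in range(d)]
    worst = 0.0
    for corner in itertools.product(*axes):
        y = np.array(corner)
        vol = np.prod(y)
        # Count with both strict and non-strict inequality to bracket the sup.
        n_open = np.sum(np.all(points < y, axis=1))
        n_closed = np.sum(np.all(points <= y, axis=1))
        worst = max(worst, abs(n_open / N - vol), abs(n_closed / N - vol))
    return worst

# Example: 64 pseudo-random points vs. a jitter-free 8x8 grid in [0,1)^2.
rng = np.random.default_rng(0)
random_pts = rng.random((64, 2))
grid_pts = np.array([[(i + 0.5) / 8, (j + 0.5) / 8]
                     for i in range(8) for j in range(8)])
print("random:", star_discrepancy(random_pts))
print("grid  :", star_discrepancy(grid_pts))
```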
2. Polylogarithmic and Averaged Bounds: From the Sup-Norm to $L_p$ and Averaged Settings
Classical discrepancy theory gives the lower bounds of Roth ($\|D_N^*\|_{L_2} \gg (\log N)^{(d-1)/2}/N$) and Schmidt ($D_N^* \gg (\log N)/N$ in dimension two) and, for the best explicit constructions, the upper bound $D_N^* \ll (\log N)^{d-1}/N$ (Larcher, 2014). Recent refined estimates focus on improved asymptotics under averaging:
- Half-Space Discrepancy: For the cube $[0,1]^d$, there exists a set of $N$ points whose averaged half-space discrepancy is only polylogarithmic in $N$, with a dimension-dependent exponent (Chen et al., 2010). This generalizes the Beck–Chen planar bound and surpasses the polynomial sup-norm bounds by exploiting averaging.
- $L_p$-Discrepancy and Symmetrization: For the symmetrized van der Corput sequence $\mathcal{V}^{\mathrm{sym}}$, the $L_p$-discrepancy attains the optimal order $\|D_N^*(\mathcal{V}^{\mathrm{sym}})\|_{L_p} \ll \sqrt{\log N}/N$ for all finite $p$, optimal by Roth–Proinov theory and proven via precise Haar-coefficient estimates and Littlewood–Paley theory (Kritzinger et al., 2015); a numerical illustration follows this list.
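The following sketch illustrates the symmetrization effect numerically: it generates the base-2 van der Corput sequence, appends the Davenport-type reflections $1 - x_n$, and evaluates the exact $L_2$ star discrepancy via Warnock's classical formula. This is our own illustration with hypothetical function names, not code from the cited paper:

```python
import numpy as np

def van_der_corput(n_points: int) -> np.ndarray:
    """First n_points terms of the base-2 van der Corput sequence."""
    out = np.empty(n_points)
    for n in range(n_points):
        x, weight, k = 0.0, 0.5, n
        while k:                      # reflect the binary digits of n
            x += (k & 1) * weight
            k >>= 1
            weight *= 0.5
        out[n] = x
    return out

def l2_star_discrepancy(x: np.ndarray) -> float:
    """Exact L2 star discrepancy in d = 1 via Warnock's formula."""
    n = len(x)
    term2 = np.sum(1.0 - x ** 2) / n
    term3 = np.sum(1.0 - np.maximum.outer(x, x)) / n ** 2
    return float(np.sqrt(1.0 / 3.0 - term2 + term3))

for m in (256, 512, 1024):
    v = van_der_corput(m)
    sym = np.concatenate([v, 1.0 - v])   # Davenport-type symmetrization
    print(m, l2_star_discrepancy(v), l2_star_discrepancy(sym))
```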
3. High-Dimensional and Randomized Bounds
Standard probabilistic constructions achieve $D_N^*(\mathcal{P}) \le C\sqrt{d/N}$ with high probability for i.i.d. uniform samples (HNWW); a numerical probe of this scaling follows the list below. Refinements yield sharper constants and explicit probability-dependent bounds (Löbbe, 2014, Pasing et al., 2018):
- For lacunary point sets $(\{n_k x\})$ with $x$ uniformly distributed, a bound of the same $\sqrt{d/N}$ shape holds with explicit constant $C$ and explicitly quantified failure probability (Löbbe, 2014).
- The explicit constant $c$ in the high-dimensional random construction achieving $D_N^* \le c\sqrt{d/N}$ is now attainable via improved bracketing/covering-number bounds (Pasing et al., 2018).
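As a sanity check on the $\sqrt{d/N}$ scaling, the sketch below probes the star discrepancy of i.i.d. uniform points from below by testing randomly drawn anchored boxes (exact computation is exponential in $d$). It illustrates only the shape of the bound, not the cited proofs; all names and parameters are our own choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def star_disc_lower_bound(points: np.ndarray, n_trials: int = 5000) -> float:
    """Monte Carlo *lower bound* on D_N*: probe random anchored boxes [0, y).

    Exact computation is exponential in d, so we only sample the supremum.
    """
    best = 0.0
    for y in rng.random((n_trials, points.shape[1])):
        frac = np.mean(np.all(points < y, axis=1))
        best = max(best, abs(frac - np.prod(y)))
    return best

N = 1000
for d in (5, 10, 20):
    pts = rng.random((N, d))
    print(d, round(star_disc_lower_bound(pts), 4),
          "vs sqrt(d/N) =", round(np.sqrt(d / N), 4))
```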
4. Orlicz, BMO, and Endpoint Discrepancy Spaces
Refined results probe discrepancy's integrability and tail behavior:
- Exponential Orlicz Estimates: For $N$-point sets in $[0,1]^d$, exponential Orlicz norm bounds of order $(\log N)^{(d-1)/2}$ hold for Skriganov–Chen-style digital nets with random shift (Amirkhanyan et al., 2013). This rate is sharp and interpolates between the classical $L_p$ and sup-norm bounds.
- BMO and Product-Orlicz in Arbitrary Dimension: Order-2 digital nets satisfy $\|D_N\|_{\mathrm{BMO}^d} \ll (\log N)^{(d-1)/2}$, which lies precisely at the critical endpoint between the $L_p$ ($p < \infty$) and $L_\infty$ scales (Bilyk et al., 2014).
These results are central to understanding the tail decay of the discrepancy function and its role in QMC integration, particularly for function classes at the boundary of $L_\infty$.
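To make the norm hierarchy concrete, a simple Monte Carlo estimate of $\|D_N\|_{L_p}$ for increasing $p$ shows the moments creeping toward sup-norm behavior. This is a generic illustration with our own (hypothetical) names and parameters, not the digital-net constructions of the cited papers:

```python
import numpy as np

rng = np.random.default_rng(2)

def discrepancy_function(points: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Unnormalized discrepancy D_N(y) = #{x_n in [0,y)} - N * vol([0,y))."""
    inside = np.all(points[None, :, :] < y[:, None, :], axis=2)   # (m, N)
    return inside.sum(axis=1) - len(points) * np.prod(y, axis=1)

def lp_norm_estimate(points: np.ndarray, p: float,
                     n_samples: int = 10000) -> float:
    """Monte Carlo estimate of ||D_N||_{L_p}, averaging over test corners y."""
    y = rng.random((n_samples, points.shape[1]))
    return float(np.mean(np.abs(discrepancy_function(points, y)) ** p)
                 ** (1.0 / p))

pts = rng.random((512, 2))
for p in (1, 2, 4, 8):
    print(p, round(lp_norm_estimate(pts, p), 3))
```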
5. Refined Estimates for Structured, Indexed, and Hybrid Sequences
- Explicit Sequences with Sharp Rates: The concatenated inverse-prime sequence, defined by blockwise inverses modulo successive primes, satisfies a sharp star-discrepancy bound that is best possible for the construction (Lind, 2021).
- Index-Transformed Sequences: For sequences reindexed by sum-of-digits or smooth index functions, refined bounds quantify the loss of uniformity under irregular indexing and generalize to multidimensional and digital settings (Kritzer et al., 2014); see the toy sketch after this list.
- Hybrid and Metrical Constructions: Metrical (average-case) theorems establish that, for almost all parameters in Kronecker, digital, or hybrid sequences, the discrepancy can be reduced to essentially $(\log N)/N$, with lower bounds matching up to a $\log\log N$-type factor (Larcher, 2014).
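The sketch below illustrates, in the simplest one-dimensional setting, the uniformity loss under irregular indexing: reindexing a Kronecker sequence through the binary sum-of-digits function collapses it onto few distinct values and inflates the discrepancy. This is a toy example of the phenomenon, not the precise constructions of (Kritzer et al., 2014):

```python
import numpy as np

def digit_sum(n: int, base: int = 2) -> int:
    """Sum of base-`base` digits of n."""
    s = 0
    while n:
        s += n % base
        n //= base
    return s

def star_disc_1d(x: np.ndarray) -> float:
    """Exact one-dimensional star discrepancy (classical sorted-point formula)."""
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    return float(np.max(np.maximum(i / n - x, x - (i - 1) / n)))

alpha = (np.sqrt(5) - 1) / 2                       # golden-ratio Kronecker parameter
N = 4096
plain = np.mod(np.arange(1, N + 1) * alpha, 1.0)   # {n * alpha}
indexed = np.mod(np.array([digit_sum(n) for n in range(1, N + 1)]) * alpha, 1.0)
print("Kronecker        :", star_disc_1d(plain))     # small, ~ log(N)/N
print("digit-sum indexed:", star_disc_1d(indexed))   # few distinct values: large
```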
6. Discrepancy Beyond Axis-Aligned Boxes and Half-Spaces
- Acceptance–Rejection and Stratified Sampling: Acceptance–rejection samplers using stratified (or net-based) driver sequences admit discrepancy rates that improve on the Monte Carlo order $N^{-1/2}$; for driver nets, the exponent improves further with the boundary cover number of the acceptance region, optimized by its smoothness or geometric complexity (Zhu et al., 2014). A sketch contrasting i.i.d. and stratified drivers follows this list.
- Markov Chain Quasi–Monte Carlo: For variance-bounding chains and suitable deterministic driver sequences, the star discrepancy decays at the rate $N^{-1/2}$ up to logarithmic factors, improving further under the anywhere-to-anywhere property (Dick et al., 2013).
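The following sketch contrasts an i.i.d. driver with a stratified (jittered-grid) driver in a one-dimensional acceptance–rejection sampler, measuring the star discrepancy of the accepted sample after mapping through the target CDF. The target density and all names are our illustrative choices, not those of (Zhu et al., 2014):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy target on [0,1]: density f(x) = 2x, CDF F(x) = x^2, bound M = 2.
f = lambda x: 2.0 * x
F = lambda x: x ** 2
M = 2.0

def accept_reject(driver: np.ndarray) -> np.ndarray:
    """Acceptance-rejection from an (n, 2) driver set in [0,1)^2:
    column 0 proposes x, column 1 drives the accept test v < f(x)/M."""
    u, v = driver[:, 0], driver[:, 1]
    return u[v * M < f(u)]

def star_disc_1d(x: np.ndarray) -> float:
    """Exact one-dimensional star discrepancy."""
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    return float(np.max(np.maximum(i / n - x, x - (i - 1) / n)))

n_cells = 64                                   # 64 x 64 = 4096 driver points
iid = rng.random((n_cells ** 2, 2))
cells = np.indices((n_cells, n_cells)).reshape(2, -1).T
strat = (cells + rng.random((n_cells ** 2, 2))) / n_cells   # jittered grid

for name, drv in (("i.i.d. driver    ", iid), ("stratified driver", strat)):
    acc = accept_reject(drv)
    # Map accepted points through F: uniformity of F(acc) measures the
    # discrepancy of the sample with respect to the target distribution.
    print(name, star_disc_1d(F(acc)))
```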
7. Smooth Discrepancy and Diophantine Approximation
Recent advances in "smooth discrepancy" investigate weighted test functions of high regularity. For the -dimensional Kronecker sequence: where encodes Diophantine properties of and (Chow et al., 25 Sep 2024). In and , for suitably badly approximable , the smooth discrepancy is bounded or grows only logarithmically, but for , the problem is tied to Littlewood's conjecture: unbounded smooth discrepancy would imply the conjecture holds in that dimension.
Table: Key Refined Discrepancy Estimates
| Setting | Refined Discrepancy Rate | Reference |
|---|---|---|
| Averaged half-space (cube in $[0,1]^d$) | polylogarithmic in $N$, dimension-dependent exponent | (Chen et al., 2010) |
| High-dimensional random star discrepancy | $D_N^* \le c\sqrt{d/N}$ with explicit $c$ | (Pasing et al., 2018) |
| Symmetrized van der Corput | $\|D_N^*\|_{L_p} \ll \sqrt{\log N}/N$ ($1 \le p < \infty$) | (Kritzinger et al., 2015) |
| Exponential Orlicz / BMO | endpoint rates of order $(\log N)^{(d-1)/2}$ | (Amirkhanyan et al., 2013; Bilyk et al., 2014) |
| Stratified acceptance–rejection | improves on the Monte Carlo rate $N^{-1/2}$ | (Zhu et al., 2014) |
| Indexed sequences | quantified uniformity loss under irregular indexing | (Kritzer et al., 2014) |
| Smooth Kronecker | bounded or logarithmic, Diophantine-dependent | (Chow et al., 25 Sep 2024) |
| Explicit 1-dim sequence (inverse primes) | sharp, best possible for the construction | (Lind, 2021) |
These advances collectively demonstrate that, by exploiting averaging, functional-analytic, combinatorial, and arithmetic structures, refined discrepancy estimates can dramatically improve the theoretical and practical bounds on uniformity of point sets in high-dimensional computational and analytic settings.