
Refined Discrepancy Estimates

Updated 19 November 2025
  • Refined discrepancy estimates are advanced quantitative bounds that improve classical worst-case measures by incorporating averaging techniques and structure-dependent criteria.
  • They enhance error bounds in quasi–Monte Carlo integration by employing Lp, BMO, and exponential Orlicz norms to precisely measure uniformity of point distributions.
  • Applications span high-dimensional random constructions, explicit sequence analysis, and acceptance–rejection sampling to optimize computational methods.

Refined discrepancy estimates are advanced quantitative bounds on the distributional irregularity, or "discrepancy," of finite and infinite point sets and sequences in multidimensional unit cubes or under various geometric or statistical transformations. These estimates go beyond classical worst-case (sup-norm) bounds to provide improved rates under averaging, moment, weighted, or structure-dependent criteria, frequently employing advanced techniques from harmonic analysis, probabilistic combinatorics, and Diophantine approximation.

1. Fundamental Notions and Discrepancy Functionals

Let $P_N=\{x_1,\dots,x_N\}\subset [0,1)^d$ be a point set. The star discrepancy measures the largest deviation, over axis-parallel boxes anchored at the origin, between the empirical measure and Lebesgue measure:

$$D_N^*(P_N) = \sup_{t\in[0,1]^d}\left|\frac{1}{N}\sum_{i=1}^N \mathbf{1}_{[0,t)}(x_i) - \prod_{j=1}^d t_j\right|.$$

Associated functionals include the $L_p$-discrepancy, BMO, and exponential Orlicz norms, as well as generalizations to half-space, $L^1$-averaged, and smooth-weighted settings. Uniformity of the multidimensional distribution is crucial for numerical integration and quasi–Monte Carlo (QMC) methods, where low discrepancy guarantees optimal error rates for integrands of bounded variation (Bilyk et al., 2014, Amirkhanyan et al., 2013, Kritzinger et al., 2015).
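To make the definition concrete, the following minimal Python sketch approximates $D_N^*$ by restricting the supremum to a finite candidate grid built from the points' own coordinates. The function name and the grid-restriction shortcut are our own illustrative choices, not from the cited papers; exact algorithms must additionally evaluate closed boxes $[0,t]$.

```python
import itertools
import numpy as np

def star_discrepancy_grid(points: np.ndarray) -> float:
    """Approximate D_N* by restricting the supremum to the grid spanned
    by the points' own coordinates together with 1.0 on each axis.
    This yields a lower bound, typically close to the true value in
    low dimension; exact algorithms also treat closed boxes [0, t]."""
    n, d = points.shape
    axes = [np.unique(np.append(points[:, j], 1.0)) for j in range(d)]
    worst = 0.0
    for corner in itertools.product(*axes):
        t = np.asarray(corner)
        inside = np.all(points < t, axis=1).mean()  # empirical measure of [0, t)
        worst = max(worst, abs(inside - t.prod()))  # vs. Lebesgue measure of the box
    return worst

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(star_discrepancy_grid(rng.random((64, 2))))
```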

2. Polylogarithmic and Averaged Bounds: From Supremum to L1L^1 and LpL^p

Classical discrepancy theory gives lower bounds $D_N^*\gtrsim (\log N)^{d/2}/N$ (Roth, Schmidt) and, for optimal explicit constructions, $D_N^* = O((\log N)^{d}/N)$ (Larcher, 2014). Recent refined estimates focus on improved asymptotics under averaging:

  • $L^1$ Half-Space Discrepancy: For the cube $[-1,1]^d$, there exists a set $P$ of $N=(2M+1)^d$ points such that

$$\sup_{r\ge 0}\int_{\Sigma_{d-1}} D_{\sigma, r}(P)\,d\sigma \leq c_d (\log N)^d,$$

where $D_{\sigma, r}$ is the discrepancy of the half-space with normal direction $\sigma$ and offset $r$, and $c_d$ is dimension-dependent (Chen et al., 2010). This generalizes the Beck–Chen planar $(\log N)^2$ bound and surpasses polynomial sup-norm bounds by exploiting $L^1$ averaging.

  • $L_p$ Discrepancy of the Symmetrized van der Corput Sequence: The symmetrized point set $V_{\text{sym}}$ satisfies

$$L_{p,N}(V_{\text{sym}}) \lesssim_p \frac{\sqrt{\log N}}{N},\quad p\in(1,\infty),$$

which is optimal by Roth–Proinov theory; the proof rests on precise Haar-coefficient estimates and Littlewood–Paley theory (Kritzinger et al., 2015). A numerical sketch of this construction follows below.
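As an illustration of the averaged ($L_2$) viewpoint, the sketch below builds the base-2 van der Corput sequence, forms one common symmetrization (the union of the points with their reflections $1-x$, our reading of $V_{\text{sym}}$), and evaluates the exact one-dimensional $L_2$ star discrepancy via Warnock's formula; the helper names are ours.

```python
import numpy as np

def van_der_corput(n: int, base: int = 2) -> np.ndarray:
    """First n terms of the van der Corput sequence (radical inverses of 1..n)."""
    out = np.empty(n)
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k:
            denom *= base
            k, rem = divmod(k, base)
            x += rem / denom
        out[i - 1] = x
    return out

def l2_star_discrepancy_1d(points: np.ndarray) -> float:
    """Exact L2 star discrepancy in dimension 1 via Warnock's formula:
    ||D_N||_2^2 = 1/3 - (1/N) sum_i (1 - x_i^2)
                + (1/N^2) sum_{i,k} (1 - max(x_i, x_k))."""
    x = np.asarray(points)
    n = x.size
    sq = (1.0 / 3.0
          - np.sum(1.0 - x**2) / n
          + np.sum(1.0 - np.maximum.outer(x, x)) / n**2)
    return float(np.sqrt(sq))

if __name__ == "__main__":
    for m in (64, 256, 1024):
        v = van_der_corput(m)
        sym = np.concatenate([v, 1.0 - v])   # symmetrized set with 2m points
        print(m, l2_star_discrepancy_1d(v), l2_star_discrepancy_1d(sym))
```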

3. High-Dimensional and Randomized Bounds

Standard probabilistic constructions achieve

$$D_N^*(X) \lesssim \sqrt{c_{\mathrm{abs}}\cdot d/N}$$

for i.i.d. uniform samples (the Heinrich–Novak–Wasilkowski–Woźniakowski bound, HNWW). Refinements yield sharper constants and explicit probability-dependent bounds (Löbbe, 2014, Pasing et al., 2018):

  • For lacunary sequences $x_n=\langle 2^{n-1}x_1\rangle$ (fractional parts) with $x_1$ uniformly distributed,

$$D_N^* \leq C(d,\delta)\sqrt{d\log_2 d/N}$$

holds with an explicit constant $C(d,\delta)$ and failure probability $\delta$ (Löbbe, 2014).

  • The explicit constant in the high-dimensional random construction $D_N^* \leq 9\sqrt{s/N}$ (with $s$ the dimension) is now attainable via improved bracketing/covering-number bounds (Pasing et al., 2018); a numerical probe of this scaling follows below.
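The $\sqrt{d/N}$ scaling for i.i.d. points can be probed numerically. The sketch below is only a hedged illustration: it estimates the star discrepancy from below by testing randomly drawn anchored boxes, and compares the estimate with the quoted $9\sqrt{d/N}$ bound; the trial count and helper names are our choices.

```python
import numpy as np

def random_box_lower_bound(points: np.ndarray, trials: int = 2000, seed: int = 1) -> float:
    """Randomized lower bound on D_N*: evaluate the local discrepancy at
    randomly drawn anchored boxes [0, t) and keep the worst case."""
    rng = np.random.default_rng(seed)
    t = rng.random((trials, points.shape[1]))
    volume = t.prod(axis=1)
    inside = (points[None, :, :] < t[:, None, :]).all(axis=2).mean(axis=1)
    return float(np.abs(inside - volume).max())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 8
    for n in (128, 512, 2048):
        pts = rng.random((n, d))
        print(n, random_box_lower_bound(pts), 9.0 * np.sqrt(d / n))
```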

4. Orlicz, BMO, and Endpoint Discrepancy Spaces

Refined results probe discrepancy's integrability and tail behavior:

  • Exponential Orlicz Estimates: For $N$-point sets $P_N\subset [0,1)^n$,

$$\| D_N(P;\cdot)\|_{\exp(L^{2/(n+1)})} \leq C_n (\log N)^{(n-1)/2}$$

holds for Skriganov–Chen-style digital nets with a random shift (Amirkhanyan et al., 2013). This rate is sharp and interpolates between the classical $L_p$ and sup-norm regimes.

  • BMO and Product-Orlicz in $d\geq 3$: Order-$2$ digital nets $P_N$ satisfy

$$\|D_{P_N}\|_{\mathrm{BMO}^d} \lesssim (\log N)^{(d-1)/2}, \qquad \|D_{P_N}\|_{\exp(L^{2/(d-1)})} \lesssim (\log N)^{(d-1)/2},$$

which lies precisely at the critical endpoint between $L^p$ and $L^\infty$ (Bilyk et al., 2014).

These results are central to understanding the tail decay of the discrepancy function and its role in QMC integration, particularly for function classes at the boundary of $L^\infty$.
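For concreteness, the exponential Orlicz (quasi-)norm appearing above is $\|f\|_{\exp(L^\alpha)} = \inf\{\lambda>0 : \int (e^{(|f|/\lambda)^\alpha}-1) \le 1\}$. A minimal sketch, with our own sampling scheme and tolerances, estimates it empirically from Monte Carlo samples of the discrepancy function:

```python
import numpy as np

def orlicz_norm(samples: np.ndarray, alpha: float, tol: float = 1e-9) -> float:
    """Empirical exp(L^alpha) Orlicz (quasi-)norm of f from samples:
    inf { lam > 0 : mean(exp((|f|/lam)^alpha) - 1) <= 1 },
    located by bisection (the constraint is monotone in lam)."""
    f = np.abs(samples)
    def excess(lam: float) -> float:
        return float(np.mean(np.expm1((f / lam) ** alpha)) - 1.0)
    lo, hi = 1e-12, max(float(f.max()), 1e-12)
    while excess(hi) > 0.0:      # enlarge until the constraint is satisfied
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if excess(mid) > 0.0 else (lo, mid)
    return hi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((256, 2))    # a point set in [0,1)^2
    t = rng.random((5000, 2))     # Monte Carlo test boxes
    disc = (pts[None] < t[:, None]).all(2).mean(1) - t.prod(1)
    print(orlicz_norm(disc, alpha=2.0 / 3.0))  # alpha = 2/(n+1) with n = 2
```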

5. Refined Estimates for Structured, Indexed, and Hybrid Sequences

  • Explicit Sequences with Sharp Rates: The concatenated inverse-prime sequence $x_n$, defined by blockwise inverses modulo primes, satisfies

$$D_N^* = (2 + o(1)) \frac{\ln N}{N},$$

and this is best possible for the construction (Lind, 2021).

  • Index-Transformed Sequences: For sequences indexed by the sum-of-digits function or by smooth index transformations (see the sketch after this list),

$$D_N^*(\mathrm{vdC}_b \circ S_q) \asymp \frac{\log\log N}{\log N},\qquad D_N^*\big(\boldsymbol{y}_{\lfloor n^{\alpha}\rfloor}\big) \asymp N^{-\alpha}(\log N)^s.$$

These bounds quantify the loss of uniformity under irregular indexing and generalize to multidimensional and digital settings (Kritzer et al., 2014).

  • Hybrid and Metric Constructions: Metrical (average-case) theorems establish that for almost all parameters in Kronecker, digital, or hybrid sequences, the discrepancy can be reduced to

$$D_N^* \lesssim \frac{(\log N)^{d}(\log\log N)^{O(1)}}{N},$$

with lower bounds matching up to a $\log\log N$ factor (Larcher, 2014).
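Here is a hedged sketch of the index-transformed case, assuming $\mathrm{vdC}_b \circ S_q$ denotes $x_n = \mathrm{vdC}_b(S_q(n))$ with $S_q$ the base-$q$ digit sum (our reading of the notation). It compares the exact one-dimensional star discrepancy of the plain van der Corput sequence with its digit-sum-indexed version, whose much slower decay matches the $\log\log N/\log N$ rate qualitatively.

```python
import numpy as np

def radical_inverse(k: int, base: int = 2) -> float:
    """van der Corput radical inverse of the integer k >= 1."""
    x, denom = 0.0, 1.0
    while k:
        denom *= base
        k, rem = divmod(k, base)
        x += rem / denom
    return x

def digit_sum(n: int, q: int = 10) -> int:
    """Base-q sum of digits S_q(n)."""
    s = 0
    while n:
        n, r = divmod(n, q)
        s += r
    return s

def star_disc_1d(x: np.ndarray) -> float:
    """Exact one-dimensional star discrepancy from the sorted points:
    D_N* = max_i max(i/N - x_(i), x_(i) - (i-1)/N)."""
    x = np.sort(x)
    n = x.size
    i = np.arange(1, n + 1)
    return float(np.max(np.maximum(i / n - x, x - (i - 1) / n)))

if __name__ == "__main__":
    for n_max in (10**3, 10**4, 10**5):
        plain = np.array([radical_inverse(n) for n in range(1, n_max + 1)])
        indexed = np.array([radical_inverse(digit_sum(n)) for n in range(1, n_max + 1)])
        print(n_max, star_disc_1d(plain), star_disc_1d(indexed))
```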

6. Discrepancy Beyond Axis-Aligned Boxes and Halfspaces

  • Acceptance–Rejection and Stratified Sampling: Acceptance–rejection samplers using stratified (or net-based) drivers admit improved discrepancy rates (a driver sketch follows this list):

$$D_N^* = O(N^{-1/2-1/(2s)}),$$

and, for driver nets with boundary cover number $\Gamma_k$,

$$D_N^* = O(N^{-\alpha}(\log N)^\beta),$$

with exponents determined by the smoothness or geometric complexity of the acceptance region (Zhu et al., 2014).

  • Markov Chain Quasi–Monte Carlo: For variance-bounding chains and suitable deterministic driver sequences, the star discrepancy decays as $n^{-1/2}(\log n)^{1/2}$, approaching $n^{-1+\delta}$ under the anywhere-to-anywhere property (Dick et al., 2013).
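The following is a minimal sketch of a net-driven acceptance–rejection sampler, assuming a simple stratified-plus-van-der-Corput driver of our own choosing (the cited papers use more refined $(t,m,s)$-net drivers). It samples the density $3x^2$ on $[0,1]$ and measures the discrepancy of the accepted points against the target CDF.

```python
import numpy as np

def van_der_corput(n: int, base: int = 2) -> np.ndarray:
    """First n van der Corput points (radical inverses of 1..n)."""
    out = np.empty(n)
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k:
            denom *= base
            k, rem = divmod(k, base)
            x += rem / denom
        out[i - 1] = x
    return out

def qmc_acceptance_rejection(pdf, bound: float, n_driver: int) -> np.ndarray:
    """Acceptance-rejection on [0,1] driven by the deterministic 2-d set
    ((i+1/2)/n, vdC_2(i)) in place of i.i.d. uniforms: proposals on the
    first axis, acceptance thresholds on the second."""
    xs = (np.arange(n_driver) + 0.5) / n_driver
    us = van_der_corput(n_driver)
    return xs[us * bound < pdf(xs)]   # accept where u * c < f(x)

if __name__ == "__main__":
    pdf = lambda x: 3.0 * x**2        # target density on [0,1]
    cdf = lambda x: x**3
    sample = np.sort(qmc_acceptance_rejection(pdf, bound=3.0, n_driver=4096))
    m = sample.size
    i = np.arange(1, m + 1)
    # star discrepancy of the accepted sample against the target CDF
    print(m, np.max(np.maximum(i / m - cdf(sample), cdf(sample) - (i - 1) / m)))
```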

7. Smooth Discrepancy and Diophantine Approximation

Recent advances in "smooth discrepancy" investigate weighted test functions of high regularity. For the $d$-dimensional Kronecker sequence,

$$D_{\omega}(\alpha; N) \lesssim \phi(L(N)),$$

where $\phi$ encodes the Diophantine properties of $\alpha$ and $L$ is given implicitly by $L(x)\,\phi(L(x)) = x$ (Chow et al., 25 Sep 2024). In $d=1$ and $d=2$, for suitably badly approximable $\alpha$, the smooth discrepancy is bounded or grows only logarithmically; for $d\geq 3$, the problem is tied to Littlewood's conjecture: unbounded smooth discrepancy would imply the conjecture holds in that dimension.
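The Diophantine dependence can be seen in a toy experiment (ours, not the functional $D_\omega$ of the paper): sum one fixed mean-zero smooth test function $w(x)=\cos(4\pi x)$ along Kronecker orbits. For the badly approximable golden-ratio $\alpha$ the sums stay bounded (by the geometric-sum bound $1/(2\|2\alpha\|)$), while a nearly rational $\alpha$, resonant at this frequency, produces near-linear growth.

```python
import math
import numpy as np

def smooth_sum(alpha: float, n_max: int, freq: int = 2) -> float:
    """|sum_{n<=N} w({n*alpha})| for the mean-zero smooth periodic test
    function w(x) = cos(2*pi*freq*x).  Bounded sums reflect good
    Diophantine behavior of alpha at this frequency; growth signals a
    resonance (freq*alpha close to an integer)."""
    n = np.arange(1, n_max + 1)
    return float(abs(np.cos(2.0 * math.pi * freq * ((n * alpha) % 1.0)).sum()))

if __name__ == "__main__":
    golden = (math.sqrt(5.0) - 1.0) / 2.0   # badly approximable
    near_rational = 0.5 + 1e-6              # resonant at frequency 2
    for n_max in (10**3, 10**4, 10**5):
        print(n_max, smooth_sum(golden, n_max), smooth_sum(near_rational, n_max))
```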

Table: Key Refined Discrepancy Estimates

| Setting | Refined discrepancy rate | Reference |
|---|---|---|
| $L^1$ half-space (cube in $d$) | $(\log N)^d$ | (Chen et al., 2010) |
| High-dimensional random star ($d$) | $c\sqrt{d/N}$, $c\in[9,10]$ | (Pasing et al., 2018) |
| Symmetrized van der Corput | $\sqrt{\log N}/N$ ($L_p$, $p>1$) | (Kritzinger et al., 2015) |
| Exponential Orlicz / BMO ($d$) | $(\log N)^{(d-1)/2}$ | (Amirkhanyan et al., 2013; Bilyk et al., 2014) |
| Stratified acceptance–rejection | $N^{-1/2-1/(2s)}$ | (Zhu et al., 2014) |
| Indexed sequences | $(\log\log N)^s/\log N$ or $N^{-\alpha}(\log N)^s$ | (Kritzer et al., 2014) |
| Smooth Kronecker | $\phi(L(N))$ (Diophantine-dependent) | (Chow et al., 25 Sep 2024) |
| Explicit 1-dim. sequence (inverse primes) | $2(\ln N)/N$ | (Lind, 2021) |

These advances collectively demonstrate that, by exploiting averaging, functional-analytic, combinatorial, and arithmetic structures, refined discrepancy estimates can dramatically improve the theoretical and practical bounds on uniformity of point sets in high-dimensional computational and analytic settings.
