
Inverse Star Discrepancy Insights

Updated 30 January 2026
  • The inverse of the star discrepancy is defined as the minimum number of points in [0,1]^d required to achieve a target uniformity threshold, governing the efficiency of quasi-Monte Carlo integration.
  • Theoretical bounds show an upper limit of O(d/ε²) and a lower limit of Ω(d/ε), highlighting a critical gap in the exponent of ε that remains an open research challenge.
  • Recent algorithmic advances, including threshold-accepting, genetic algorithms, and component-by-component methods, offer practical construction strategies for near-optimal low-discrepancy point sets.

The inverse of the star discrepancy, denoted $N^*(d, \varepsilon)$, is the minimum number of points in $[0,1]^d$ required to form a set whose star discrepancy does not exceed a specified threshold $\varepsilon$. This concept is fundamental in discrepancy theory, quasi-Monte Carlo methods, and high-dimensional numerical integration, serving as a quantitative gauge of how efficiently uniformity can be achieved. The sharp dependence of $N^*(d, \varepsilon)$ on both $d$ and $\varepsilon$ encapsulates major open questions in the constructive and algorithmic aspects of low-discrepancy point set generation.

1. Formal Definition and Problem Statement

Given $P_N = \{x_1, \dots, x_N\} \subset [0,1]^d$, the star discrepancy is

$$D_N^*(P_N) = \sup_{b \in [0,1]^d} \left| \frac{1}{N} \#\{x_n \in [0,b)\} - \prod_{i=1}^d b_i \right|.$$

Define the minimal star discrepancy for $N$-point sets,

$$D^*(N, d) = \inf_{P_N \subset [0,1]^d,\ |P_N| = N} D_N^*(P_N),$$

and the inverse star discrepancy,

$$N^*(d, \varepsilon) = \min \left\{ N \in \mathbb{N} : D^*(N, d) \leq \varepsilon \right\}.$$

The central problem is to determine, for fixed $d$ and $\varepsilon$, the asymptotic behavior and explicit constants in $N^*(d, \varepsilon)$, or equivalently in bounds for $D^*(N, d)$.
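As a concrete instance of these definitions, the following sketch computes $D_N^*$ exactly for small point sets by brute force. It relies on the standard fact that the supremum over anchored boxes is attained (from the open or closed side) at boxes whose upper corners are built from the point coordinates and 1 in each axis; the function name is an illustrative choice.

```python
import itertools

import numpy as np

def star_discrepancy(points):
    """Exact star discrepancy of a small point set in [0,1]^d.

    Enumerates up to (N+1)^d candidate box corners, so it is only
    feasible for small N and d; the supremum over anchored boxes
    [0, b) is attained at one of these corners.
    """
    pts = np.asarray(points, dtype=float)
    n, d = pts.shape
    # Candidate corner coordinates per axis: point coordinates and 1.0.
    grids = [np.unique(np.append(pts[:, i], 1.0)) for i in range(d)]
    best = 0.0
    for corner in itertools.product(*grids):
        b = np.array(corner)
        vol = b.prod()
        n_open = np.all(pts < b, axis=1).sum()     # points in [0, b)
        n_closed = np.all(pts <= b, axis=1).sum()  # points in [0, b]
        best = max(best, vol - n_open / n, n_closed / n - vol)
    return best
```

For example, the one-point set $\{0.5\}$ in $[0,1]$ has $D_1^* = 0.5$, while the centered two-point set $\{0.25, 0.75\}$ achieves the optimal one-dimensional value $1/(2N) = 0.25$.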

2. Classical Bounds and Existential Results

The landmark theorem of Heinrich, Novak, Wasilkowski, and Woźniakowski established that for some absolute constant $C$,

$$N^*(d, \varepsilon) \leq C^2 d \varepsilon^{-2},$$

with the matching star discrepancy bound $D^*(N, d) \leq C \sqrt{d/N}$ holding for all $N$ and $d$ (Aistleitner, 2012, Dick et al., 2014, Aistleitner et al., 2012). The proof uses the probabilistic method: uniform random sampling in $[0,1]^d$ ensures, via concentration inequalities and bracketing arguments, that with positive probability a random set attains the required discrepancy bound.

Subsequent refinements (Aistleitner, Gnewuch, Pillichshammer, Wohlmuth) have iteratively reduced the best-known explicit constant $C$. As of 2024, the leading constant is $c = 2.4631837$, yielding

$$N^*(d, \varepsilon) \leq 6.0665\, d\, \varepsilon^{-2}$$

via explicit probabilistic bracketing constructions involving scrambled Halton sequences and $p$-adic discrepancy theory (Weiß, 2024).
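With the constant made explicit, the guarantee is directly computable. A minimal sketch (the helper name is illustrative):

```python
import math

def points_sufficient(d, eps):
    """Sample size guaranteed sufficient by N*(d, eps) <= 6.0665 d eps^-2."""
    return math.ceil(6.0665 * d / eps ** 2)
```

Note the scaling behavior: halving $\varepsilon$ roughly quadruples the guaranteed sample size, while doubling $d$ only doubles it.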

3. Lower Bounds and Exponent Gaps

Lower bounds trace primarily to results by Hinrichs and Steinerberger, who applied VC-theory and geometric arguments to yield

$$N^*(d, \varepsilon) \geq c\, \frac{d}{\varepsilon},$$

with $c = 1/40$ (or improved to $1/(9e)$ with refinements) (2207.13471). These results show that $d/\varepsilon$ growth is unavoidable in the minimal sample size needed for star discrepancy $\leq \varepsilon$, but a polynomial gap persists between the $d/\varepsilon$ lower bound and the $d/\varepsilon^2$ upper bound. The correct exponent of $\varepsilon$ in $N^*(d, \varepsilon)$ remains open.

Random point set lower bounds (Doerr) show that with overwhelming probability,

$$D^*(P) \geq K \sqrt{d/N},$$

implying that, for the majority of sets, $N^*(d, \varepsilon)$ cannot be smaller than $\Omega(d/\varepsilon^2)$ (Doerr, 2012).
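A quick experiment is consistent with this $\sqrt{d/N}$ behavior of random points. Since exact evaluation is expensive beyond small instances, the sketch below uses random anchored test boxes, which give a lower estimate of $D^*$ (an assumption here: the number of test boxes is large enough to be representative).

```python
import numpy as np

def disc_lower_estimate(pts, n_boxes=4000, seed=0):
    """Lower estimate of D*(pts): max local discrepancy over random boxes."""
    rng = np.random.default_rng(seed)
    n, d = pts.shape
    b = rng.random((n_boxes, d))  # upper corners of test boxes [0, b)
    counts = (pts[None, :, :] < b[:, None, :]).all(axis=2).sum(axis=1)
    return np.abs(counts / n - b.prod(axis=1)).max()

# i.i.d. uniform point sets of increasing size in dimension 5
rng = np.random.default_rng(1)
d = 5
estimates = {n: disc_lower_estimate(rng.random((n, d))) for n in (100, 1600)}
```

With $d = 5$, the estimate for 1600 points comes out markedly smaller than for 100 points, consistent with the $\sqrt{d/N}$ order.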

4. Constructive and Algorithmic Advances

Most existential results are non-constructive and do not directly yield point sets. Several algorithmic frameworks have enabled construction of near-optimal sets:

  • Threshold-accepting and genetic algorithms: Randomized and evolutionary techniques have been developed for explicit point set optimization, notably for generalized Halton and scrambled digital nets, achieving tight upper bounds for moderate $d$ and $\varepsilon$ (Doerr et al., 2013).
  • Lacunary sequences and double infinite matrices: Lacunary and hybrid constructions reduce bit or computational cost while nearly achieving the optimal rate, up to logarithmic terms (Löbbe, 2014).
  • Component-by-component and bracketing number methods: Recent breakthroughs in bracketing number estimates and interval covers have further improved algorithmically achievable constants, approaching the theoretical best (Weiß, 2024).

Yet, no known deterministic and polynomial-time construction achieves the $O(d\varepsilon^{-2})$ rate without extra logarithmic or polynomial factors.
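To give the flavor of the randomized searches mentioned above, here is a minimal threshold-accepting sketch. It is not the algorithm of the cited papers: it optimizes a cheap surrogate (maximum local discrepancy over a fixed set of random test boxes) rather than the NP-hard exact star discrepancy, and all parameters are illustrative.

```python
import numpy as np

def surrogate_disc(pts, boxes):
    """Max local discrepancy over a fixed collection of test boxes [0, b)."""
    n = len(pts)
    counts = (pts[None, :, :] < boxes[:, None, :]).all(axis=2).sum(axis=1)
    return np.abs(counts / n - boxes.prod(axis=1)).max()

def threshold_accepting(n, d, iters=1000, seed=0):
    """Improve a random start by local moves, accepting small deteriorations."""
    rng = np.random.default_rng(seed)
    boxes = rng.random((300, d))          # fixed surrogate objective
    pts = rng.random((n, d))
    cur = best = surrogate_disc(pts, boxes)
    best_pts = pts.copy()
    for t in range(iters):
        thresh = 0.05 * (1 - t / iters)   # linearly shrinking threshold
        cand = pts.copy()
        i = rng.integers(n)               # perturb one point
        cand[i] = np.clip(cand[i] + rng.normal(0.0, 0.1, d), 0.0, 1.0 - 1e-9)
        val = surrogate_disc(cand, boxes)
        if val - cur <= thresh:           # accept if not much worse
            pts, cur = cand, val
            if val < best:
                best, best_pts = val, pts.copy()
    return best_pts, best
```

Threshold accepting differs from simulated annealing in that any move whose deterioration stays below the current (shrinking) threshold is accepted deterministically, with no probabilistic acceptance rule.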

5. Explicit Probabilistic/Structural Constructions

Product-structured sets and hybrid schemes have been proposed to narrow the existence–construction gap:

  • Multiset unions of (digitally shifted) Korobov polynomial lattice point sets: Both probabilistic and deterministic parameter selection yield star discrepancy

$$D^*(P) \leq C\, \frac{d \log N}{\sqrt{N}},$$

with $N^*(d, \varepsilon) = O\bigl((d \log(d/\varepsilon)/\varepsilon)^2\bigr)$ (Du et al., 23 Jan 2026, Dick et al., 19 Sep 2025).

  • Jittered sampling: Stratified (grid-based) random sampling achieves

$$N^*(d, \varepsilon) = O\bigl((d/\varepsilon^2)^{d/(d+1)}\bigr),$$

an asymptotic improvement in the exponent of $\varepsilon$ for high $d$ and small $\varepsilon$ relative to classical bounds (Pausinger et al., 2015).

These approaches fundamentally reduce the continuous search space for candidate point sets to finite, structured families, though most still require probabilistic existence arguments or computational post-selection.
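Of these schemes, jittered sampling is the simplest to implement: partition $[0,1]^d$ into $m^d$ congruent subcubes and place one uniform random point in each. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def jittered_sample(m, d, seed=None):
    """One uniform point per subcube of the m^d grid partition of [0,1]^d."""
    rng = np.random.default_rng(seed)
    # Integer lower corners of all m^d subcubes.
    corners = np.stack(
        np.meshgrid(*([np.arange(m)] * d), indexing="ij"), axis=-1
    ).reshape(-1, d)
    # Jitter each corner uniformly within its cell, then rescale to [0,1)^d.
    return (corners + rng.random(corners.shape)) / m
```

The resulting $N = m^d$ points carry the stratification that underlies the improved exponent $d/(d+1)$ quoted above: every grid cell contains exactly one point.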

6. Practical Implications and Applications

Reducing $N^*(d, \varepsilon)$ is central to quasi-Monte Carlo integration, where small star discrepancy guarantees low worst-case integration error via the Koksma–Hlawka inequality. In practice, recent constant improvements translate directly into computational savings in high-dimensional applications (e.g., QMC for PDEs with random input, machine learning).
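This mechanism shows up in even a small integration experiment. The sketch below builds a plain (unscrambled) Halton sequence from radical inverses and integrates the test function $f(x) = \prod_i x_i$, whose exact integral over $[0,1]^d$ is $2^{-d}$; the function choice and sizes are arbitrary illustrative choices.

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    f, x = 1.0, 0.0
    while i > 0:
        f /= base
        x += f * (i % base)
        i //= base
    return x

def halton(n, d):
    """First n points of the d-dimensional Halton sequence (indices 1..n)."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:d]
    return np.array([[radical_inverse(i, p) for p in primes]
                     for i in range(1, n + 1)])

d, n = 5, 1024
f = lambda x: x.prod(axis=1)
exact = 0.5 ** d  # integral of prod(x_i) over [0,1]^5

err_qmc = abs(f(halton(n, d)).mean() - exact)
err_mc = abs(f(np.random.default_rng(0).random((n, d))).mean() - exact)
```

The low star discrepancy of the Halton points keeps the QMC error small here, as the Koksma–Hlawka inequality predicts for a function of bounded variation.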

Algorithmic advances allow, for modest $d$, either construction or stochastic identification of point sets with nearly optimal $N^*(d, \varepsilon)$. However, NP-hardness of star discrepancy computation remains a severe bottleneck for large $d$, and practical implementations must often resort to surrogate or heuristic evaluations (Doerr et al., 2013, Aistleitner et al., 2012).

7. Open Directions and Conjectures

Key unresolved problems include:

  • Determining whether $N^*(d, \varepsilon)$ can be achieved with linear dependence in both $d$ and $1/\varepsilon$, thus closing the exponent gap (2207.13471).
  • Providing an explicit, deterministic, and polynomial-time construction achieving $N^*(d, \varepsilon) = O(d\varepsilon^{-2})$ without logarithmic inflation (Dick et al., 2014, Weiß, 2024).
  • Extending bounds to weighted or non-axis-aligned discrepancy, and tailoring constructions to tractable function classes in applied QMC.
  • Further reducing the subleading constants, especially in the regime of practical dimensions ($d \lesssim 20$).

The field continues to balance fundamental probabilistic existence results against the ongoing pursuit of explicit, computationally feasible constructions. The asymptotic behavior $N^*(d, \varepsilon) \sim d\, \varepsilon^{-\gamma}$ with $1 < \gamma < 2$ remains a principal focus of theoretical and applied discrepancy research.
