Average Squared Discrepancy Analysis

Updated 12 October 2025
  • Average squared discrepancy is a symmetric measure that averages the squared L2 star discrepancy over all 2^d cube-vertex anchorings to quantify uniformity in high-dimensional point sets.
  • It overcomes the limitations of classical discrepancy measures by avoiding origin bias and mitigating known pathologies in uniform sampling.
  • Its computational efficiency (O(dn^2)) and differentiability facilitate gradient-based optimization in quasi–Monte Carlo integration and randomized algorithms.

Average squared discrepancy is a symmetrized, smooth, and computationally tractable measure for quantifying the uniformity of finite point sets in high-dimensional unit cubes. It is specifically designed to overcome structural and computational limitations of classic discrepancy criteria, such as the L_\infty star and L_2 star discrepancies, by averaging over all 2^d possible cube-vertex anchorings. This approach yields a criterion that remains robust against certain pathologies and facilitates optimization-based point set construction in quasi-Monte Carlo integration, randomized algorithms, and uniform sampling applications (Clément et al., 6 Aug 2025).

1. Definition and Formulation

Average squared discrepancy, denoted here as (D_2^{\mathrm{asd}})^2, is defined for a point set \{x_1,\dots,x_n\} \subset [0,1]^d by averaging the squared L_2 star discrepancy over all 2^d cube vertices. For each subset u \subseteq \{1,2,\dots,d\}, one constructs a partially reflected point set x_1^u,\dots,x_n^u, in which coordinate j of each point is left unchanged if j \in u and reflected to 1-x_{ij} if j \notin u. The L_2 star discrepancy D_2^*, anchored at the origin, is computed for each reflected point set, and the criterion averages its square over all 2^d subsets u:

(D_2^{\mathrm{asd}})^2 = \frac{1}{2^d}\sum_{u \subseteq \{1,\ldots,d\}} \left[ D_2^*\left(x_1^u,\dots,x_n^u\right) \right]^2

This can be computed via a closed-form Warnock-type formula:

(D_2^{\mathrm{asd}})^2 = \frac{1}{3^d} - \frac{2}{n} \sum_{i=1}^n \prod_{j=1}^d \frac{1+2x_{ij}-2x_{ij}^2}{4} + \frac{1}{n^2} \sum_{i,i'=1}^n \prod_{j=1}^d \frac{1 - |x_{ij} - x_{i'j}|}{2}

This formulation retains the O(dn^2) computational complexity of the classical L_2 star discrepancy, despite the apparent exponential number of anchorings.
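
For concreteness, the following is a minimal sketch of the closed form in Python with jax.numpy, together with a brute-force check against the definitional average over all 2^d coordinate reflections. The function names are illustrative only; this is not the reference implementation accompanying the cited paper.

```python
# Minimal sketch (not the authors' reference code): closed-form average squared
# discrepancy, checked against the definitional average over all 2^d reflections.
import itertools
import jax
import jax.numpy as jnp

def avg_sq_discrepancy(x):
    """Closed-form (D_2^asd)^2 for an (n, d) array of points in [0,1]^d."""
    n, d = x.shape
    term2 = (2.0 / n) * jnp.sum(jnp.prod((1 + 2 * x - 2 * x**2) / 4, axis=1))
    diff = jnp.abs(x[:, None, :] - x[None, :, :])          # pairwise |x_ij - x_i'j|
    term3 = jnp.sum(jnp.prod((1 - diff) / 2, axis=2)) / n**2
    return 3.0 ** (-d) - term2 + term3

def l2_star_discrepancy_sq(x):
    """Warnock's formula for the classical origin-anchored (D_2^*)^2."""
    n, d = x.shape
    term2 = (2.0 / n) * jnp.sum(jnp.prod((1 - x**2) / 2, axis=1))
    term3 = jnp.sum(jnp.prod(jnp.minimum(1 - x[:, None, :], 1 - x[None, :, :]),
                             axis=2)) / n**2
    return 3.0 ** (-d) - term2 + term3

def avg_sq_discrepancy_bruteforce(x):
    """Definition: average (D_2^*)^2 over all 2^d coordinate reflections."""
    n, d = x.shape
    total = 0.0
    for mask in itertools.product([0, 1], repeat=d):
        m = jnp.array(mask)
        total += l2_star_discrepancy_sq(jnp.where(m == 1, x, 1 - x))
    return total / 2**d

key = jax.random.PRNGKey(0)
pts = jax.random.uniform(key, (64, 3))
print(float(avg_sq_discrepancy(pts)))             # closed form
print(float(avg_sq_discrepancy_bruteforce(pts)))  # should agree up to float error
```

The brute-force routine is exponential in d and is included only as a sanity check; the closed form is what keeps the cost at O(dn^2).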

2. Motivation and Pathology Avoidance

The L_\infty star discrepancy, which measures the maximal deviation from uniformity over all axis-aligned anchored boxes, is neither differentiable nor cheap to compute, requiring the exploration of O(n^d) boxes. The classical L_2 star discrepancy, anchored only at the origin, is differentiable and computable in O(dn^2) time, but it suffers from severe asymmetry and the unintuitive pathologies described by Matoušek. Most notable is "Pathology II": point sets concentrated at a cube vertex can exhibit anomalously low discrepancy values, leading to misleading uniformity assessments unless n is exponentially large in d.

Average squared discrepancy removes this origin bias by symmetrizing across all cube vertices, so that pathologies arising from privileging particular vertices are suppressed. By construction, no corner is favored, and the criterion penalizes non-uniform configurations regardless of where they sit relative to any anchor.
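
As a concrete worked example (derived here from the standard Warnock formula for the classical (D_2^*)^2 and the averaged closed form above, not an experiment reported in the cited paper), consider n points all placed at a single vertex of the unit cube. If every point sits at (1,\dots,1), the origin-anchored criterion gives

(D_2^*)^2 = \frac{1}{3^d},

an exponentially small value that mimics near-perfect uniformity, whereas the same degenerate configuration placed at the origin gives

(D_2^*)^2 = \frac{1}{3^d} - \frac{2}{2^d} + 1 \approx 1.

The averaged criterion assigns both configurations the identical value

(D_2^{\mathrm{asd}})^2 = \frac{1}{3^d} - \frac{2}{4^d} + \frac{1}{2^d} \approx \frac{1}{2^d},

which is roughly n times larger than the expected value (2^{-d} - 3^{-d})/n attained by n i.i.d. uniform points under either criterion (coordinate reflections preserve the uniform distribution), so the degenerate set is penalized no matter which vertex it occupies. Under the origin-anchored L_2 star discrepancy, by contrast, the set clustered at (1,\dots,1) scores better than a random set whenever n < (3/2)^d - 1, which is precisely the exponential sample-size requirement mentioned above.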

3. Relationship to Weighted Symmetric L_2 Discrepancy

The measure is equivalent, up to a multiplicative constant, to Hickernell's weighted symmetric L_2 discrepancy. This equivalence is established by expressing (D_2^{\mathrm{asd}})^2 as a weighted L_2 norm of the local discrepancy function, integrating over all possible box anchorings with weights given by their relative volumes. This confirms that the average squared criterion is a theoretically sound symmetric generalization of the L_2 star discrepancy.

4. Comparison with Classical Discrepancy Measures

Measure                        Symmetry      Complexity  Differentiability  Pathology Robustness
L_\infty star discrepancy      Origin        O(n^d)      No                 Poor
L_2 star discrepancy           Origin        O(dn^2)     Yes                Poor
Average squared discrepancy    All vertices  O(dn^2)     Yes                Excellent
Weighted symmetric L_2         All vertices  O(dn^2)     Yes                Excellent

Among these measures, only the average squared discrepancy and the equivalent weighted symmetric L_2 criterion combine computational efficiency, differentiability, and full symmetry without sacrificing analytic tractability.

5. Numerical Optimization and Performance

Extensive numerical experiments in dimension two (Clément et al., 6 Aug 2025) demonstrate the practical implications of optimizing point sets for average squared discrepancy versus classical measures:

  • Optimized sets for (D_2^{\mathrm{asd}})^2 outperform Sobol’ points by 10–40% in terms of discrepancy magnitude.
  • Optimization for this symmetric discrepancy also produces point sets with strong L_2 star discrepancy, whereas sets optimized solely for the L_2 star discrepancy may fail to generalize and perform poorly under other criteria.
  • The differentiable structure of (D_2^{\mathrm{asd}})^2 is amenable to gradient-based optimization algorithms such as Message-Passing Monte Carlo (MPMC); a minimal autodiff sketch is given at the end of this section.

These results indicate that the averaging strategy yields point sets that are well distributed under several criteria simultaneously, avoiding the narrow specialization typical of optimizing for a single classical measure.
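
Because the closed form is smooth almost everywhere in the point coordinates, standard automatic differentiation applies directly. The sketch below is an illustrative example only (it is neither the MPMC method nor the experimental setup of Clément et al.): it uses jax.grad and plain projected gradient descent, with an ad hoc step size, to reduce the average squared discrepancy of a random two-dimensional point set.

```python
# Illustrative gradient descent on (D_2^asd)^2 via automatic differentiation.
# A hedged sketch, not the optimization pipeline used in the cited paper.
import jax
import jax.numpy as jnp

def avg_sq_discrepancy(x):
    """Closed-form (D_2^asd)^2 for an (n, d) array of points in [0,1]^d."""
    n, d = x.shape
    term2 = (2.0 / n) * jnp.sum(jnp.prod((1 + 2 * x - 2 * x**2) / 4, axis=1))
    diff = jnp.abs(x[:, None, :] - x[None, :, :])
    term3 = jnp.sum(jnp.prod((1 - diff) / 2, axis=2)) / n**2
    return 3.0 ** (-d) - term2 + term3

grad_fn = jax.grad(avg_sq_discrepancy)

key = jax.random.PRNGKey(1)
x = jax.random.uniform(key, (32, 2))   # random starting configuration in [0,1]^2
step = 1.0                             # step size chosen ad hoc for this sketch

for it in range(3000):
    x = x - step * grad_fn(x)          # gradient step on the criterion
    x = jnp.clip(x, 0.0, 1.0)          # project back onto the unit square
    if it % 1000 == 0:
        print(it, float(avg_sq_discrepancy(x)))

print("final", float(avg_sq_discrepancy(x)))
```

This only demonstrates that the criterion is directly amenable to autodiff-based optimization; the reported comparisons against Sobol’ points come from the experiments in the cited paper, not from this sketch.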

6. Practical Applications and Theoretical Implications

  • Quasi–Monte Carlo integration: Point sets of low average squared discrepancy offer improved worst-case guarantees for integration error.
  • Randomized algorithms: Uniform coverage and reduced variance in sampling schemes.
  • Uniform grid design: Robustness against alignment artifacts, useful for randomized load balancing and distributed systems.
  • Discrepancy theory: Provides a canonical smoothing and symmetrization, addressing longstanding concerns about the geometric structure and sensitivity of classical star discrepancy measures.

7. Conclusion

Average squared discrepancy provides a fundamental advance in discrepancy measurement, synthesizing symmetry, differentiability, and computational tractability. As established in recent research (Clément et al., 6 Aug 2025), it not only resolves important analytic pathologies but also leads to high-quality, well-distributed point sets for a range of scientific and engineering applications. Its equivalence to the weighted symmetric L_2 criterion strengthens its theoretical foundation, making it a preferred criterion in contexts requiring rigorous discrepancy minimization and robust uniformity analysis.
