Average Squared Discrepancy Analysis
- Average squared discrepancy is a symmetric measure that averages the squared $L_2$ star discrepancy over all $2^d$ cube vertices to quantify uniformity in high-dimensional point sets.
- It overcomes the limitations of classical discrepancy measures by avoiding origin bias and mitigating known pathologies in uniform sampling.
- Its computational efficiency (O(dn^2)) and differentiability facilitate gradient-based optimization in quasi–Monte Carlo integration and randomized algorithms.
Average squared discrepancy is a symmetrized, smooth, and computationally tractable measure for quantifying the uniformity of finite point sets in high-dimensional unit cubes. It is specifically designed to overcome structural and computational limitations of classic discrepancy criteria, such as the $L_\infty$ star and $L_2$ star discrepancies, by averaging over all possible cube-vertex anchorings. This approach yields a criterion that remains robust against certain pathologies and facilitates optimization-based point set construction in quasi-Monte Carlo integration, randomized algorithms, and uniform sampling applications (Clément et al., 6 Aug 2025).
1. Definition and Formulation
Average squared discrepancy, denoted here as $\overline{D}(P)$, is defined for a point set $P = \{x_1, \dots, x_n\} \subset [0,1)^d$ by averaging the squared $L_2$ star discrepancy over all $2^d$ cube vertices. For each subset $S \subseteq \{1, \dots, d\}$, one constructs a partially reflected version $P^S$, where each coordinate is either left unchanged ($x^S_j = x_j$ for $j \notin S$) or reflected to $x^S_j = 1 - x_j$ (for $j \in S$). The $L_2$ star discrepancy, $D_2^*(P^S)$, is computed for each reflected point cloud anchored at the origin; the criterion then averages the squares across all $S$:

$$\overline{D}(P)^2 = \frac{1}{2^d} \sum_{S \subseteq \{1,\dots,d\}} D_2^*(P^S)^2.$$
This can be computed via a closed-form Warnock-type formula, obtained by averaging Warnock's formula for $D_2^*$ coordinate by coordinate over the reflections:

$$\overline{D}(P)^2 = \frac{1}{3^d} - \frac{2}{n}\sum_{i=1}^{n}\prod_{j=1}^{d}\frac{1 + 2x_{ij}(1 - x_{ij})}{4} + \frac{1}{n^2}\sum_{i=1}^{n}\sum_{k=1}^{n}\prod_{j=1}^{d}\frac{1 - |x_{ij} - x_{kj}|}{2}.$$
This formulation maintains the $O(dn^2)$ computational complexity of the classical $L_2$ star discrepancy, despite the apparently exponential number ($2^d$) of anchorings.
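The definition and the closed form above can be sketched in code. The following is a minimal illustrative implementation (not the authors' code): it evaluates the criterion both by brute-force averaging of Warnock's formula over all $2^d$ reflections and by the $O(dn^2)$ closed form, and the two agree up to floating-point error.

```python
import itertools
import numpy as np

def l2_star_sq(P):
    """Squared L2 star discrepancy via Warnock's formula, O(d n^2)."""
    n, d = P.shape
    mx = np.maximum(P[:, None, :], P[None, :, :])  # pairwise coordinate maxima
    return (3.0 ** -d
            - (2.0 / n) * np.sum(np.prod((1.0 - P**2) / 2.0, axis=1))
            + np.sum(np.prod(1.0 - mx, axis=2)) / n**2)

def avg_sq_discrepancy_bruteforce(P):
    """Average of squared L2 star discrepancy over all 2^d reflections."""
    n, d = P.shape
    total = 0.0
    for S in itertools.product([0, 1], repeat=d):
        refl = np.where(np.array(S, bool), 1.0 - P, P)  # reflect coords in S
        total += l2_star_sq(refl)
    return total / 2.0 ** d

def avg_sq_discrepancy(P):
    """Closed-form averaged Warnock formula, O(d n^2)."""
    n, d = P.shape
    diff = np.abs(P[:, None, :] - P[None, :, :])  # pairwise |x_ij - x_kj|
    return (3.0 ** -d
            - (2.0 / n) * np.sum(np.prod((1.0 + 2.0 * P * (1.0 - P)) / 4.0, axis=1))
            + np.sum(np.prod((1.0 - diff) / 2.0, axis=2)) / n**2)

rng = np.random.default_rng(0)
P = rng.random((20, 3))
print(avg_sq_discrepancy(P), avg_sq_discrepancy_bruteforce(P))
```

The brute-force loop is exponential in $d$ and serves only as a sanity check; the closed form is the practical evaluation path.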
2. Motivation and Pathology Avoidance
The $L_\infty$ star discrepancy, which measures the maximal deviation from uniformity over all axis-aligned anchored boxes, is neither differentiable nor cheap to evaluate: exact computation requires exploring $O(n^d)$ boxes and is NP-hard in general. The classical $L_2$ star discrepancy, anchored only at the origin, is differentiable and computable in $O(dn^2)$ time via Warnock's formula, but is susceptible to severe asymmetry and to the unintuitive pathologies described by Matoušek, most notably "Pathology II": point sets concentrated at the vertex $(1, \dots, 1)$ may exhibit anomalously low discrepancy values, leading to misleading uniformity assessments unless $n$ is exponentially large in $d$.
Average squared discrepancy resolves this origin bias by symmetrizing across all $2^d$ cube vertices, ensuring that pathologies arising from privileging special lattice positions are suppressed. By construction, no particular corner is favored, and the criterion therefore robustly penalizes non-uniform configurations regardless of anchor.
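A small numerical sketch of the pathology (illustrative, not taken from the paper; the helper functions restate Warnock's formula and the averaged closed form for self-containment): piling all points on the vertex $(1, \dots, 1)$ yields an origin-anchored $L_2$ star discrepancy of exactly $3^{-d/2}$, comparable to that of a genuinely random set, while the vertex-averaged criterion clearly flags the degenerate configuration.

```python
import numpy as np

def l2_star_sq(P):
    """Squared L2 star discrepancy (Warnock's formula)."""
    n, d = P.shape
    mx = np.maximum(P[:, None, :], P[None, :, :])
    return (3.0 ** -d
            - (2.0 / n) * np.sum(np.prod((1.0 - P**2) / 2.0, axis=1))
            + np.sum(np.prod(1.0 - mx, axis=2)) / n**2)

def avg_sq_discrepancy(P):
    """Vertex-averaged squared L2 star discrepancy (closed form)."""
    n, d = P.shape
    diff = np.abs(P[:, None, :] - P[None, :, :])
    return (3.0 ** -d
            - (2.0 / n) * np.sum(np.prod((1.0 + 2.0 * P * (1.0 - P)) / 4.0, axis=1))
            + np.sum(np.prod((1.0 - diff) / 2.0, axis=2)) / n**2)

d, n = 10, 100
corner = np.ones((n, d))            # all points piled on the vertex (1, ..., 1)
rng = np.random.default_rng(1)
random_pts = rng.random((n, d))

# Origin-anchored criterion barely distinguishes the degenerate set ...
print(l2_star_sq(corner), l2_star_sq(random_pts))
# ... while the vertex-averaged criterion penalizes it by orders of magnitude.
print(avg_sq_discrepancy(corner), avg_sq_discrepancy(random_pts))
```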
3. Relationship to Weighted Symmetric Discrepancy
The measure $\overline{D}$ is equivalent (up to a multiplicative constant) to Hickernell's weighted symmetric discrepancy. This equivalence is established by expressing $\overline{D}(P)^2$ as a weighted norm of the local discrepancy function, integrating over all possible box anchorings with weights given by their relative volumes. This confirms the average squared criterion as a theoretically sound symmetric generalization.
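One way to make the equivalence concrete (a reconstruction, not quoted from the paper): assuming Hickernell's symmetric $L_2$ discrepancy in its standard closed form, the averaged Warnock formula given above satisfies $D_{\mathrm{sym}}(P)^2 = 4^d \, \overline{D}(P)^2$, i.e., the constant is a per-coordinate factor of 4. A quick numerical check of this reconstructed identity:

```python
import numpy as np

def avg_sq_discrepancy(P):
    """Vertex-averaged squared L2 star discrepancy (closed form)."""
    n, d = P.shape
    diff = np.abs(P[:, None, :] - P[None, :, :])
    return (3.0 ** -d
            - (2.0 / n) * np.sum(np.prod((1.0 + 2.0 * P * (1.0 - P)) / 4.0, axis=1))
            + np.sum(np.prod((1.0 - diff) / 2.0, axis=2)) / n**2)

def symmetric_sq(P):
    """Hickernell-style symmetric L2 discrepancy, squared (standard closed form)."""
    n, d = P.shape
    diff = np.abs(P[:, None, :] - P[None, :, :])
    return ((4.0 / 3.0) ** d
            - (2.0 / n) * np.sum(np.prod(1.0 + 2.0 * P * (1.0 - P), axis=1))
            + (2.0 ** d / n**2) * np.sum(np.prod(1.0 - diff, axis=2)))

rng = np.random.default_rng(0)
P = rng.random((20, 3))
print(4.0 ** P.shape[1] * avg_sq_discrepancy(P), symmetric_sq(P))
```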
4. Comparison with Classical Discrepancy Measures
| Measure | Symmetry | Complexity | Differentiability | Pathology Robustness |
|---|---|---|---|---|
| $L_\infty$ star discrepancy | Origin | $O(n^d)$ (NP-hard) | No | Poor |
| $L_2$ star discrepancy | Origin | $O(dn^2)$ | Yes | Poor |
| Average squared discrepancy | All vertices | $O(dn^2)$ | Yes | Excellent |
| Weighted symmetric (Hickernell) | All vertices | $O(dn^2)$ | Yes | Excellent |
The average squared discrepancy is unique among these in combining computational efficiency, differentiability, and full symmetry without sacrificing analytic tractability.
5. Numerical Optimization and Performance
Extensive numerical experiments in dimension two (Clément et al., 6 Aug 2025) demonstrate the practical implications of optimizing point sets for average squared discrepancy versus classical measures:
- Optimized sets for $\overline{D}$ outperform Sobol’ points by 10–40% in discrepancy magnitude.
- Optimizing for this symmetric discrepancy produces point sets that also exhibit strong $L_2$ star discrepancy values, whereas sets optimized solely for the origin-anchored star discrepancy may fail to generalize and perform poorly under other criteria.
- The differentiable structure of $\overline{D}$ is amenable to gradient-based optimization algorithms such as Message-Passing Monte Carlo (MPMC).
These results indicate that the averaging strategy yields point sets that are universally well-distributed, avoiding the narrow specialization typical of classical optimization approaches.
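The differentiability claim can be illustrated with a toy optimizer (a minimal sketch using finite-difference gradients on the closed form; this is not the MPMC method, and the point count, step size, and iteration budget are arbitrary choices):

```python
import numpy as np

def avg_sq_discrepancy(P):
    """Vertex-averaged squared L2 star discrepancy (closed form)."""
    n, d = P.shape
    diff = np.abs(P[:, None, :] - P[None, :, :])
    return (3.0 ** -d
            - (2.0 / n) * np.sum(np.prod((1.0 + 2.0 * P * (1.0 - P)) / 4.0, axis=1))
            + np.sum(np.prod((1.0 - diff) / 2.0, axis=2)) / n**2)

def optimize(P, steps=100, lr=0.2, eps=1e-6):
    """Plain gradient descent with central finite-difference gradients."""
    P = P.copy()
    best_val, best_P = avg_sq_discrepancy(P), P.copy()
    for _ in range(steps):
        grad = np.zeros_like(P)
        for idx in np.ndindex(P.shape):
            Pp = P.copy(); Pp[idx] += eps
            Pm = P.copy(); Pm[idx] -= eps
            grad[idx] = (avg_sq_discrepancy(Pp) - avg_sq_discrepancy(Pm)) / (2 * eps)
        P = np.clip(P - lr * grad, 0.0, 1.0)  # project back into the unit cube
        val = avg_sq_discrepancy(P)
        if val < best_val:
            best_val, best_P = val, P.copy()
    return best_P, best_val

rng = np.random.default_rng(2)
P0 = rng.random((16, 2))
P_opt, v_opt = optimize(P0)
print(avg_sq_discrepancy(P0), v_opt)
```

In practice one would use analytic or automatic differentiation of the closed form rather than finite differences; the sketch only shows that the criterion is smooth enough for descent to make steady progress from a random start.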
6. Practical Applications and Theoretical Implications
- Quasi–Monte Carlo integration: Point sets of low average squared discrepancy offer improved worst-case guarantees for integration error.
- Randomized algorithms: Uniform coverage and reduced variance in sampling schemes.
- Uniform grid design: Robustness against alignment artifacts, useful for randomized load balancing and distributed systems.
- Discrepancy theory: Provides a canonical smoothing and symmetrization, addressing longstanding concerns about the geometric structure and sensitivity of classical star discrepancy measures.
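As a quick illustration of why low values of this criterion matter for sampling applications (illustrative only; the paper's experiments use optimized and Sobol' sets, whereas a 2D Hammersley set is used here for simplicity), a classical low-discrepancy construction attains a markedly lower average squared discrepancy than i.i.d. random points:

```python
import numpy as np

def avg_sq_discrepancy(P):
    """Vertex-averaged squared L2 star discrepancy (closed form)."""
    n, d = P.shape
    diff = np.abs(P[:, None, :] - P[None, :, :])
    return (3.0 ** -d
            - (2.0 / n) * np.sum(np.prod((1.0 + 2.0 * P * (1.0 - P)) / 4.0, axis=1))
            + np.sum(np.prod((1.0 - diff) / 2.0, axis=2)) / n**2)

def van_der_corput(i, base=2):
    """Radical inverse of the integer i in the given base."""
    v, denom = 0.0, 1.0
    while i > 0:
        i, rem = divmod(i, base)
        denom *= base
        v += rem / denom
    return v

n = 256
hammersley = np.array([[i / n, van_der_corput(i)] for i in range(n)])
rng = np.random.default_rng(3)
random_pts = rng.random((n, 2))

print(avg_sq_discrepancy(hammersley), avg_sq_discrepancy(random_pts))
```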
7. Conclusion
Average squared discrepancy provides a fundamental advance in discrepancy measurement, synthesizing symmetry, differentiability, and computational tractability. As established in recent research (Clément et al., 6 Aug 2025), it not only resolves important analytic pathologies but also leads to high-quality, optimally distributed point sets for a range of scientific and engineering applications. Its equivalence to weighted symmetric criteria strengthens its theoretical foundation, making it the preferred criterion in contexts requiring rigorous discrepancy minimization and robust uniformity analysis.