O(n)-sample randomized QMC achieving improvement beyond Hardy–Krause

Develop a randomized quasi-Monte Carlo algorithm that achieves the same improved integration error guarantee beyond the Koksma–Hlawka inequality as established in this paper (i.e., an error bound governed by the smoothed-out variation σ_SO(f) rather than the Hardy–Krause variation V_HK(f)), while using only \widetilde{O}_d(n) random samples to generate an n-point QMC set and running in \widetilde{O}_d(n) time, rather than starting from n^2 samples as in the SubgTransference method.
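For orientation, the classical bound being improved can be written schematically as follows; the paper's improved bound has the same shape with σ_SO(f) in place of V_HK(f). This is a paraphrase with constants and exact logarithmic factors suppressed, not the paper's precise statement:

```latex
% Koksma--Hlawka: integration error via star discrepancy and Hardy--Krause variation.
\left| \frac{1}{n}\sum_{x \in P} f(x) - \int_{[0,1]^d} f(y)\,dy \right|
  \;\le\; D^{*}(P)\, V_{\mathrm{HK}}(f),
\qquad
D^{*}(P) = O_d\!\left( \frac{\log^{d-1} n}{n} \right)
% for suitably constructed n-point sets P; the paper's improvement replaces
% V_HK(f) with the (potentially much smaller) smoothed-out variation sigma_SO(f).
```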

Background

The paper’s SubgTransference algorithm produces n-point QMC sets whose integration error is bounded in terms of the smoothed-out variation σ_SO(f), substantially improving on the classical Koksma–Hlawka inequality based on Hardy–Krause variation. However, the method begins with n^2 uniformly random samples and recursively partitions them to obtain the final n-point sets; the sketch below illustrates this halving structure.
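To make the sample-count structure concrete, here is a minimal runnable sketch under stated assumptions; it is not the paper's implementation. The names transference_halving and subgaussian_coloring are hypothetical, and the coloring is stubbed with random signs, which forfeits the subgaussian-discrepancy guarantee the real algorithm relies on:

```python
import math
import numpy as np

def subgaussian_coloring(points: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Placeholder for the paper's subgaussian-discrepancy coloring.

    A real implementation would return +/-1 signs whose combinatorial
    discrepancy on axis-parallel boxes is subgaussian; a uniformly random
    coloring is used here only so the sketch runs, and it does NOT give
    the paper's guarantee.
    """
    return rng.choice([-1.0, 1.0], size=len(points))

def transference_halving(n: int, d: int, seed: int | None = None) -> np.ndarray:
    """Schematic SubgTransference-style construction (illustrative only).

    Draws m ~ n^2 i.i.d. uniform samples and halves the set ~log2(n) times.
    Each round colors the current points +/-1 and keeps one color class, so
    the kept half tracks the empirical measure of the full set up to the
    coloring's discrepancy.
    """
    rng = np.random.default_rng(seed)
    k = math.ceil(math.log2(max(n, 2)))    # number of halving rounds
    pts = rng.uniform(size=(n * 2**k, d))  # m ~ n^2 initial random samples
    for _ in range(k):
        signs = subgaussian_coloring(pts, rng)
        plus, minus = pts[signs > 0.0], pts[signs <= 0.0]
        # Keep the larger class so at least ceil(m/2) points survive each
        # round; a genuinely balanced coloring keeps the classes near-equal.
        pts = plus if len(plus) >= len(minus) else minus
    return pts[:n]                         # at least n points remain
```

The design point is that the final error roughly decomposes into the Monte Carlo error of the initial n^2 samples plus a discrepancy term per halving round, which is why the initial sample size, not n, dictates the first term.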

The authors note that for constructing low-discrepancy point sets (without the improved error guarantee), there exist algorithms that use only O(n) random samples (e.g., Dwivedi et al., 2019). In the present setting, however, reducing the sample count to O(n) while retaining the σ_SO-based error bound is challenging: the error err(A_0, f) of the initial sample set already scales as Ω(σ(f)/√n) when only O(n) samples are drawn, which is exactly the Monte Carlo rate. The open problem is to design a more direct algorithm that circumvents this bottleneck and attains the improved bound using only \widetilde{O}_d(n) samples and \widetilde{O}_d(n) time.
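Schematically, and under the assumption (implicit in the recursive approach) that halving can only add error on top of the initial sample's, the arithmetic forcing n^2 initial samples is:

```latex
% Monte Carlo rate of the initial m-point i.i.d. sample A_0:
\mathbb{E}\,\mathrm{err}(A_0, f) \;\asymp\; \frac{\sigma(f)}{\sqrt{m}}
\quad\Longrightarrow\quad
\begin{cases}
  m = n^2: & \mathrm{err}(A_0, f) \sim \sigma(f)/n, \ \text{compatible with a $1/n$-type bound},\\[3pt]
  m = O(n): & \mathrm{err}(A_0, f) = \Omega\bigl(\sigma(f)/\sqrt{n}\bigr), \ \text{no better than Monte Carlo}.
\end{cases}
```

Hence an \widetilde{O}_d(n)-sample algorithm meeting the improved bound must somehow avoid paying the full Monte Carlo error of its initial sample, i.e., depart from the start-big-and-halve template.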

References

It would be nice to obtain a more direct algorithm that achieves our improvement over Hardy-Krause variation and Koksma-Hlawka inequalities, but only needs \widetilde{O}_d(n) random samples to generate an $n$-point QMC set, and runs in \widetilde{O}_d(n) time. We leave the question of obtaining such an algorithm that uses fewer samples as an interesting open problem.

Quasi-Monte Carlo Beyond Hardy-Krause (2408.06475 - Bansal et al., 12 Aug 2024) in Section 6: Concluding Remarks