O(n)-sample randomized QMC achieving improvement beyond Hardy–Krause
Develop a randomized quasi-Monte Carlo algorithm that achieves the same improved integration-error guarantee beyond the Koksma–Hlawka inequality as established in this paper — that is, an error bound governed by the smoothed-out variation σ_SO(f) rather than the Hardy–Krause variation — while using only Õ_d(n) random samples to generate an n-point QMC set and running in Õ_d(n) time, instead of starting from n² samples as in the SubgTransference method.
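For context, a classical randomized QMC construction already achieves the sample- and time-efficiency asked for here, but is only analyzed under Koksma–Hlawka-type bounds in terms of Hardy–Krause variation — the open question is to combine this efficiency with the paper's stronger σ_SO(f) guarantee. Below is a minimal sketch (not the paper's SubgTransference method) of one such construction: a Cranley–Patterson random shift of a d-dimensional Halton point set, which uses only d uniform random numbers and O(d·n) time to produce an n-point randomized QMC set. The test function and parameters are illustrative choices, not from the paper.

```python
import math
import random

def van_der_corput(i, base):
    # Radical inverse of the integer i in the given base
    # (the i-th term of the van der Corput sequence).
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, rem = divmod(i, base)
        x += rem / denom
    return x

def halton_point(i, bases):
    # i-th point of the Halton sequence: one van der Corput
    # coordinate per (pairwise coprime) base.
    return [van_der_corput(i, b) for b in bases]

def shifted_halton_estimate(f, n, d, rng):
    # Cranley-Patterson rotation: draw one uniform shift per
    # coordinate and add it mod 1 to every Halton point. Only d
    # random numbers are consumed for the whole n-point set, and
    # the estimator is unbiased for the integral of f over [0,1]^d.
    bases = [2, 3, 5, 7, 11, 13][:d]  # first d primes (d <= 6 here)
    shift = [rng.random() for _ in range(d)]
    total = 0.0
    for i in range(1, n + 1):
        x = [(u + s) % 1.0 for u, s in zip(halton_point(i, bases), shift)]
        total += f(x)
    return total / n

# Illustrative check: integrate f(x) = prod_j x_j over [0,1]^3,
# whose exact value is (1/2)^3 = 0.125.
d, n = 3, 4096
rng = random.Random(0)
f = lambda x: math.prod(x)
est = shifted_halton_estimate(f, n, d, rng)
err = abs(est - 0.5 ** d)
print(est, err)
```

For smooth integrands like this one, the shifted-Halton error is typically far below the ~n^{-1/2} rate of plain Monte Carlo with the same n; the point of the open problem is to obtain such an O_d(n)-cost randomized construction whose error is controlled by σ_SO(f) rather than by Hardy–Krause variation.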
Sponsor
References
It would be nice to obtain a more direct algorithm that achieves our improvement over the Hardy–Krause variation and Koksma–Hlawka inequalities, but only needs \widetilde{O}_d(n) random samples to generate an $n$-point QMC set, and runs in \widetilde{O}_d(n) time. We leave the question of obtaining such an algorithm that uses fewer samples as an interesting open problem.