Theoretical guarantees for quasi-random search in HPO

Establish theoretical results or guarantees that support the effectiveness of quasi-random search strategies based on low-discrepancy sequences (e.g., Latin hypercube sampling, Sobol', Halton, Hammersley) for hyperparameter optimization, clarifying when and why they outperform uniform random search.

Background

In the survey of non-adaptive search methods, quasi-random strategies are motivated by their more even coverage of the search space compared to uniform random sampling, potentially offering advantages in HPO, especially when effective dimensionality is low.
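The coverage argument can be illustrated with a minimal sketch (not from the survey; the Halton bases, point count, and the crude one-dimensional gap statistic are illustrative choices): points from a 2-D Halton sequence leave smaller uncovered gaps along each axis than the same number of i.i.d. uniform draws.

```python
import random

def halton(index: int, base: int) -> float:
    """index-th element (1-indexed) of the van der Corput sequence in
    the given base; Halton points use one coprime base per dimension."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def halton_2d(n: int) -> list[tuple[float, float]]:
    # Coprime bases (2, 3) for the two dimensions.
    return [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]

def uniform_2d(n: int, seed: int = 0) -> list[tuple[float, float]]:
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n)]

def max_gap(points: list[tuple[float, float]]) -> float:
    """Crude coverage proxy: largest gap between consecutive sorted
    x-coordinates, including the [0, 1] boundaries."""
    xs = [0.0] + sorted(p[0] for p in points) + [1.0]
    return max(b - a for a, b in zip(xs, xs[1:]))

qmc_pts = halton_2d(64)
rnd_pts = uniform_2d(64)
print("Halton max gap :", max_gap(qmc_pts))
print("Uniform max gap:", max_gap(rnd_pts))
```

With 64 points, the Halton x-coordinates are (nearly) equally spaced, so the largest gap stays close to 1/64, whereas independent uniform draws typically leave gaps several times wider. This evenness is what the low effective dimensionality argument exploits: if only a few hyperparameters matter, the projection of a low-discrepancy design onto those coordinates remains well spread.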

Although empirical reports suggest consistent improvements over random search on certain deep learning tasks, the authors note the absence of formal theoretical results validating these advantages, highlighting a need for rigorous analysis to underpin practical adoption.

References

Quasi-random search may offer benefits over random search, although to date we are not aware of any theoretical result supporting this.

Hyperparameter Optimization in Machine Learning (2410.22854 - Franceschi et al., 30 Oct 2024) in Chapter 3, Section 3.3 (Quasi-random search)