Automatic Termination for Hyperparameter Optimization (2104.08166v4)

Published 16 Apr 2021 in cs.LG, cs.AI, and stat.ML

Abstract: Bayesian optimization (BO) is a widely used approach for hyperparameter optimization (HPO) in machine learning. At its core, BO iteratively evaluates promising configurations until a user-defined budget, such as wall-clock time or number of iterations, is exhausted. While the final performance after tuning heavily depends on the provided budget, it is hard to specify an optimal value in advance. In this work, we propose an effective and intuitive termination criterion for BO that automatically stops the procedure if it is sufficiently close to the global optimum. Our key insight is that the discrepancy between the true objective (predictive performance on test data) and the computable target (validation performance) suggests stopping once the suboptimality in optimizing the target is dominated by the statistical estimation error. Across an extensive range of real-world HPO problems and baselines, we show that our termination criterion achieves a better trade-off between the test performance and optimization time. Additionally, we find that overfitting may occur in the context of HPO, which is arguably an overlooked problem in the literature, and show how our termination criterion helps to mitigate this phenomenon on both small and large datasets.
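As a rough illustration of the stopping rule the abstract describes, the sketch below runs a toy GP-based BO loop and terminates once a confidence-bound estimate of the simple regret falls below the statistical error of the validation estimate. This is a minimal sketch under stated assumptions, not the paper's reference implementation: the toy objective, the `regret_upper_bound` helper, the candidate grid, and the use of a known noise level `NOISE_STD` as the estimation error are all illustrative choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
# Assumed known here for simplicity; in practice the estimation error would
# itself be estimated, e.g. from cross-validation fold variance.
NOISE_STD = 0.05

def validation_loss(x):
    """Stand-in for an expensive train/validate cycle; the additive noise
    plays the role of the statistical error in the validation estimate."""
    return float((x - 0.3) ** 2 + NOISE_STD * rng.standard_normal())

def regret_upper_bound(gp, X_cand, beta=2.0):
    """High-probability bound on the simple regret (minimization):
    the incumbent's value is at most the lowest upper confidence bound,
    and the global minimum is at least the lowest lower confidence bound,
    so the gap between the two bounds the regret."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    return (mu + beta * sigma).min() - (mu - beta * sigma).min()

X_cand = np.linspace(0.0, 1.0, 200).reshape(-1, 1)   # candidate hyperparameters
X = list(rng.uniform(0.0, 1.0, size=3).reshape(-1, 1))  # random initial design
y = [validation_loss(x[0]) for x in X]

for t in range(50):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  alpha=NOISE_STD ** 2,
                                  normalize_y=True).fit(np.vstack(X), y)
    bound = regret_upper_bound(gp, X_cand)
    # Core idea from the abstract: stop once the remaining suboptimality is
    # dominated by the statistical estimation error of the validation metric.
    if bound < NOISE_STD:
        print(f"terminated at iteration {t}: regret bound {bound:.4f} < {NOISE_STD}")
        break
    # Standard LCB acquisition: evaluate the most promising candidate next.
    mu, sigma = gp.predict(X_cand, return_std=True)
    x_next = X_cand[np.argmin(mu - 2.0 * sigma)]
    X.append(x_next)
    y.append(validation_loss(x_next[0]))

best = int(np.argmin(y))
print(f"best hyperparameter: {np.vstack(X)[best][0]:.3f}, loss {y[best]:.4f}")
```

The design choice mirrors the abstract's insight: once further optimization of the validation objective cannot improve by more than the noise in measuring it, additional BO iterations only risk overfitting the validation set, so stopping early costs little test performance.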

Authors (8)
  1. Anastasia Makarova (7 papers)
  2. Huibin Shen (10 papers)
  3. Valerio Perrone (20 papers)
  4. Aaron Klein (24 papers)
  5. Jean Baptiste Faddoul (5 papers)
  6. Andreas Krause (269 papers)
  7. Matthias Seeger (22 papers)
  8. Cedric Archambeau (44 papers)
Citations (20)
