
Iterative Deepening Hyperband (2302.00511v2)

Published 1 Feb 2023 in cs.LG and cs.AI

Abstract: Hyperparameter optimization (HPO) is concerned with the automated search for the most appropriate hyperparameter configuration (HPC) of a parameterized machine learning algorithm. A state-of-the-art HPO method is Hyperband, which, however, has its own parameters that influence its performance. One of these parameters, the maximal budget, is especially problematic: If chosen too small, the budget needs to be increased in hindsight and, as Hyperband is not incremental by design, the entire algorithm must be re-run. This is not only costly but also comes with a loss of valuable knowledge already accumulated. In this paper, we propose incremental variants of Hyperband that eliminate these drawbacks, and show that these variants satisfy theoretical guarantees qualitatively similar to those for the original Hyperband with the "right" budget. Moreover, we demonstrate their practical utility in experiments with benchmark data sets.
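
To see why the maximal budget is problematic, consider the standard Hyperband schedule: every bracket is derived from the maximal budget R, so raising R afterwards changes all bracket sizes and allocations, and the algorithm must start over. The following is a minimal sketch of vanilla Hyperband (not the paper's incremental variant); `sample_config` and `evaluate` are hypothetical stand-ins for a real search space and training routine.

```python
import math
import random

def sample_config():
    # Hypothetical search space: a single learning-rate hyperparameter.
    return {"lr": 10 ** random.uniform(-4, 0)}

def evaluate(config, budget):
    # Hypothetical loss, lower is better; a real run would train the
    # model for `budget` epochs/iterations and return validation loss.
    return abs(math.log10(config["lr"]) + 2) + 1.0 / budget

def hyperband(R, eta=3):
    """Standard Hyperband. Every bracket depends on the maximal budget R,
    so increasing R in hindsight invalidates the whole schedule and forces
    a complete re-run -- the drawback the paper's incremental variants avoid."""
    s_max = int(math.log(R, eta) + 1e-9)   # number of brackets - 1
    B = (s_max + 1) * R                    # total budget per bracket
    best, best_loss = None, float("inf")
    for s in range(s_max, -1, -1):
        n = math.ceil(B / R * eta**s / (s + 1))  # initial configs in bracket
        r = R * eta**(-s)                        # initial budget per config
        configs = [sample_config() for _ in range(n)]
        # Successive halving within this bracket.
        for i in range(s + 1):
            n_i = math.floor(n * eta**(-i))
            r_i = r * eta**i
            losses = [evaluate(c, r_i) for c in configs]
            ranked = sorted(zip(losses, range(len(configs))))
            keep = max(1, math.floor(n_i / eta))
            configs = [configs[j] for _, j in ranked[:keep]]
            if ranked[0][0] < best_loss:
                best_loss, best = ranked[0][0], configs[0]
    return best, best_loss

print(hyperband(R=81))
```

Note how R appears in every line of the schedule (s_max, B, n, r): this coupling is what the iterative-deepening variants proposed in the paper are designed to break, so that the budget can grow without discarding results already computed.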

Authors (5)
  1. Jasmin Brandt (4 papers)
  2. Marcel Wever (23 papers)
  3. Dimitrios Iliadis (4 papers)
  4. Viktor Bengs (23 papers)
  5. Eyke Hüllermeier (129 papers)
Citations (1)
