
Genealogical Population-Based Training for Hyperparameter Optimization (2109.14925v2)

Published 30 Sep 2021 in cs.LG

Abstract: HyperParameter Optimization (HPO) aims at finding the best HyperParameters (HPs) of learning models, such as neural networks, as quickly and efficiently as possible. Most recent HPO algorithms optimize HPs regardless of the model that obtained them, assuming that the same HPs will produce very similar results for different models. We break free from this paradigm and propose a new take on preexisting methods that we call Genealogical Population Based Training (GPBT). Through the shared histories of "genealogically" related models, GPBT exploits the coupling of HPs and models in an efficient way. We experimentally demonstrate that, compared to current algorithms, our method cuts the required computational cost by a factor of 2 to 3, generally yields about a 1% accuracy improvement on computer vision tasks, and reduces the variance of the results by an order of magnitude. Our method is search-algorithm agnostic, so the inner search routine can be any search algorithm such as TPE, GP, CMA, or random search.
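The abstract only describes GPBT at a high level: a population of models whose children inherit both their parent's state and its genealogical history of (HP, score) pairs, with a pluggable inner search routine proposing new HPs. Below is a minimal, hedged sketch of such a loop under those assumptions; the `Member` class, `gpbt_step` function, toy objective, and `search_fn` interface are illustrative placeholders, not the authors' implementation.

```python
import copy
import random

def random_search(history):
    """Illustrative inner search routine: propose HPs, here ignoring history.
    GPBT is search-algorithm agnostic, so TPE, GP, or CMA could be plugged in
    instead (this name and signature are assumptions, not the paper's API)."""
    return {"lr": 10 ** random.uniform(-4, -1),
            "momentum": random.uniform(0.5, 0.99)}

class Member:
    """One model in the population, carrying a genealogical history of
    (hyperparameters, score) pairs inherited from its ancestors."""
    def __init__(self, hps, history=None):
        self.hps = hps
        self.history = list(history or [])
        self.score = None

    def train_and_eval(self):
        # Placeholder for a partial training run plus validation scoring.
        self.score = -((self.hps["lr"] - 0.01) ** 2)  # toy objective
        self.history.append((dict(self.hps), self.score))
        return self.score

def gpbt_step(population, search_fn, frac=0.5):
    """One generation of a GPBT-style loop (sketch): evaluate all members,
    then replace the worst fraction with children of the best, where each
    child inherits its parent's state and genealogical history and draws
    new HPs from the inner search routine."""
    for m in population:
        m.train_and_eval()
    population.sort(key=lambda m: m.score, reverse=True)
    n_replace = int(len(population) * frac)
    survivors = population[:len(population) - n_replace]
    children = []
    for _ in range(n_replace):
        parent = random.choice(survivors)
        child = copy.deepcopy(parent)          # inherit model state and history
        child.hps = search_fn(child.history)   # new HPs informed by the lineage
        children.append(child)
    return survivors + children

# Usage: evolve a small population for a few generations.
population = [Member(random_search([])) for _ in range(8)]
for generation in range(5):
    population = gpbt_step(population, random_search)
print(max(m.score for m in population))
```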

Authors (4)
  1. Antoine Scardigli (3 papers)
  2. Paul Fournier (5 papers)
  3. Matteo Vilucchio (6 papers)
  4. David Naccache (42 papers)
