Learning Multiple Defaults for Machine Learning Algorithms (1811.09409v3)

Published 23 Nov 2018 in stat.ML and cs.LG

Abstract: The performance of modern machine learning methods depends heavily on their hyperparameter configurations. One simple way of selecting a configuration is to use default settings, often proposed along with the publication and implementation of a new algorithm. These default values are usually chosen in an ad-hoc manner to work well enough on a wide variety of datasets. To address this problem, different automatic hyperparameter configuration algorithms have been proposed, which select an optimal configuration per dataset. This principled approach usually improves performance but adds algorithmic complexity and computational cost to the training procedure. As an alternative, we propose learning a set of complementary default values from a large database of prior empirical results. Selecting an appropriate configuration on a new dataset then requires only a simple, efficient, and embarrassingly parallel search over this set. We demonstrate the effectiveness and efficiency of the proposed approach in comparison to random search and Bayesian optimization.
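The abstract implies two stages: offline, assembling a small set of complementary defaults from a database of prior results, and online, evaluating each default on the new dataset (an embarrassingly parallel search) and keeping the best. The sketch below illustrates one way this could work; the greedy set construction, the performance matrix `perf`, and the mean aggregation are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def learn_defaults(perf: np.ndarray, k: int) -> list[int]:
    """Greedily select k complementary configurations (illustrative sketch).

    perf[i, j] is the performance (higher is better) of configuration j
    on prior dataset i, drawn from a database of empirical results.
    Each step adds the configuration that most improves the average
    best-in-set performance across the prior datasets.
    """
    n_datasets, _ = perf.shape
    chosen: list[int] = []
    best_in_set = np.full(n_datasets, -np.inf)  # per-dataset best score so far
    for _ in range(k):
        # Value of each candidate: mean best-in-set score after adding it.
        gains = np.maximum(perf, best_in_set[:, None]).mean(axis=0)
        gains[chosen] = -np.inf  # never pick the same configuration twice
        j = int(np.argmax(gains))
        chosen.append(j)
        best_in_set = np.maximum(best_in_set, perf[:, j])
    return chosen

def pick_default(defaults: list[int], evaluate) -> int:
    """On a new dataset, try every learned default (embarrassingly parallel
    in practice) and return the one with the best evaluation score."""
    scores = [evaluate(cfg) for cfg in defaults]
    return defaults[int(np.argmax(scores))]

# Example: 5 prior datasets, 8 candidate configurations, learn 3 defaults.
rng = np.random.default_rng(0)
perf = rng.random((5, 8))
defaults = learn_defaults(perf, k=3)
```

Because each default is evaluated independently on the new dataset, the online search trivially parallelizes across configurations, which is the efficiency argument the abstract makes against per-dataset tuning.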

Authors (5)
  1. Florian Pfisterer (23 papers)
  2. Jan N. van Rijn (23 papers)
  3. Philipp Probst (8 papers)
  4. Andreas Müller (43 papers)
  5. Bernd Bischl (136 papers)
Citations (22)
