PASHA: Efficient HPO and NAS with Progressive Resource Allocation (2207.06940v2)

Published 14 Jul 2022 in cs.LG and stat.ML

Abstract: Hyperparameter optimization (HPO) and neural architecture search (NAS) are methods of choice to obtain the best-in-class machine learning models, but in practice they can be costly to run. When models are trained on large datasets, tuning them with HPO or NAS rapidly becomes prohibitively expensive for practitioners, even when efficient multi-fidelity methods are employed. We propose an approach to tackle the challenge of tuning machine learning models trained on large datasets with limited computational resources. Our approach, named PASHA, extends ASHA and is able to dynamically allocate maximum resources for the tuning procedure depending on the need. The experimental comparison shows that PASHA identifies well-performing hyperparameter configurations and architectures while consuming significantly fewer computational resources than ASHA.
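The core idea described in the abstract is that PASHA, unlike ASHA, does not fix the maximum training budget up front: it starts with a small budget and grows it only while the ranking of configurations at the top rungs is still changing. The sketch below illustrates that control flow under stated assumptions; it is not the authors' reference implementation (PASHA is available in Syne Tune), and the function names, the toy `evaluate` objective, and the top-k ranking test are illustrative simplifications of the paper's ranking-stability criterion.

```python
# Illustrative sketch of PASHA-style progressive resource allocation.
# Assumptions: a toy `evaluate` objective stands in for training a
# configuration at a given resource level; `rankings_agree` is a
# simplified stand-in for the paper's ranking-stability check.
import random

def evaluate(config, resource):
    # Toy stand-in for training `config` for `resource` epochs and
    # returning a validation score (higher is better).
    random.seed(hash((config["lr"], resource)) % (2**32))
    return config["lr"] * resource + random.gauss(0, 0.1)

def rankings_agree(scores_lo, scores_hi, top_k=3):
    # Stop growing the budget when the same configurations rank
    # identically at the two highest rungs: more resources are then
    # unlikely to change which configuration wins.
    common = set(scores_lo) & set(scores_hi)
    rank_lo = sorted(common, key=lambda c: -scores_lo[c])[:top_k]
    rank_hi = sorted(common, key=lambda c: -scores_hi[c])[:top_k]
    return rank_lo == rank_hi

def pasha(configs, min_r=1, max_r=81, eta=3):
    rung_scores = {}           # resource level -> {config index: score}
    current_max = min_r * eta  # start with only two rungs, not max_r
    survivors = list(range(len(configs)))
    r = min_r
    while True:
        rung_scores[r] = {i: evaluate(configs[i], r) for i in survivors}
        # ASHA-style promotion: keep the top 1/eta fraction.
        survivors = sorted(survivors, key=lambda i: -rung_scores[r][i])
        survivors = survivors[: max(1, len(survivors) // eta)]
        if r >= current_max:
            lo, hi = sorted(rung_scores)[-2:]
            if rankings_agree(rung_scores[lo], rung_scores[hi]) or current_max >= max_r:
                break  # ranking stable (or hard cap reached): stop early
            current_max = min(current_max * eta, max_r)  # grow the budget
        r *= eta
    top_rung = rung_scores[max(rung_scores)]
    return configs[max(top_rung, key=top_rung.get)]

if __name__ == "__main__":
    grid = [{"lr": 10 ** random.uniform(-4, -1)} for _ in range(27)]
    print(pasha(grid))
```

The saving relative to ASHA comes from the `current_max` variable: when the top-rung ranking stabilizes early, the search never pays for training runs at the full budget `max_r`.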

Authors (6)
  1. Ondrej Bohdal (19 papers)
  2. Lukas Balles (17 papers)
  3. Martin Wistuba (30 papers)
  4. Beyza Ermis (31 papers)
  5. Giovanni Zappella (28 papers)
  6. Cédric Archambeau (18 papers)
Citations (12)
