
Automated Machine Learning: From Principles to Practices (1810.13306v5)

Published 31 Oct 2018 in cs.AI, cs.LG, and stat.ML

Abstract: Machine learning (ML) methods have been developing rapidly, but configuring and selecting proper methods to achieve a desired performance is increasingly difficult and tedious. To address this challenge, automated machine learning (AutoML) has emerged, which aims to generate satisfactory ML configurations for given tasks in a data-driven way. In this paper, we provide a comprehensive survey on this topic. We begin with the formal definition of AutoML and then introduce its principles, including the bi-level learning objective, the learning strategy, and the theoretical interpretation. Then, we summarize the AutoML practices by setting up the taxonomy of existing works based on three main factors: the search space, the search algorithm, and the evaluation strategy. Each category is also explained with the representative methods. Then, we illustrate the principles and practices with exemplary applications from configuring ML pipeline, one-shot neural architecture search, and integration with foundation models. Finally, we highlight the emerging directions of AutoML and conclude the survey.

Authors (5)
  1. Quanming Yao (102 papers)
  2. Zhenqian Shen (2 papers)
  3. Yongqi Zhang (33 papers)
  4. Lanning Wei (16 papers)
  5. Huan Zhao (109 papers)
Citations (256)

Summary

This paper undertakes a comprehensive exploration of automated machine learning (AutoML), examining its theoretical foundations, practical implementations, and emerging directions. As ML techniques evolve, configuring and optimizing models for specific tasks grows increasingly complex, motivating AutoML as a solution. In essence, AutoML seeks to automate the design of ML pipelines, reducing both the reliance on human expertise and the manual effort involved in configuring models.

Conceptual Overview

The paper presents AutoML as a formalized process for automating learning configurations in a data-driven manner. AutoML addresses the complexity and time cost of traditional ML configuration, which demands significant manual intervention and domain expertise. The discussion begins by defining the AutoML problem as a bi-level optimization: an inner problem fits model parameters on the training data, while an outer problem searches for the learning configuration that maximizes validation performance.
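Under this framing, the bi-level objective can be written as follows (the notation here follows a common convention and may differ from the paper's exact symbols):

```latex
\begin{aligned}
\alpha^{*} &= \operatorname*{arg\,min}_{\alpha \in \mathcal{A}} \; \mathcal{L}_{\text{val}}\bigl(w^{*}(\alpha), \alpha\bigr) \\
\text{s.t.}\quad w^{*}(\alpha) &= \operatorname*{arg\,min}_{w} \; \mathcal{L}_{\text{train}}(w, \alpha)
\end{aligned}
```

Here \(\alpha\) is a learning configuration drawn from the search space \(\mathcal{A}\), \(w\) denotes the model parameters, and the two losses are measured on the validation and training sets respectively. The nesting is what makes AutoML expensive: each outer candidate \(\alpha\) nominally requires solving an inner training problem.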

Key Elements of AutoML

The paper identifies three core components central to AutoML: search space, search algorithm, and evaluation strategy.

  1. Search Space: The search space defines the set of candidate configurations that AutoML can explore. It may be a general space, a structured design such as a cell-based architecture, or a transformation that makes the search more tractable, such as softmax relaxation or sparse coding.
  2. Search Algorithm: The search algorithm tackles the outer optimization problem. Techniques range from traditional approaches such as random and grid search to more sophisticated methods like Bayesian optimization, gradient-based techniques, evolutionary algorithms, and reinforcement learning, each offering different trade-offs between efficiency and robustness.
  3. Evaluation Strategy: Efficient and accurate model evaluation is critical. Strategies include learning-curve monitoring, parameter reuse, and performance prediction with surrogate models, all of which mitigate the often prohibitive computational cost of fully training and evaluating every candidate.
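The three components above can be made concrete with a minimal sketch: a toy search space, random search as the (simplest baseline) search algorithm, and a stand-in evaluation function. Everything here is illustrative; a real evaluation would train a model and score it on held-out data.

```python
import random

# 1. Search space: a small grid of candidate configurations (illustrative).
SEARCH_SPACE = {
    "learning_rate": [0.001, 0.01, 0.1],
    "num_layers": [1, 2, 3],
}

# 3. Evaluation strategy: a toy stand-in for validation performance.
# In practice this would train a model on the training split and return
# its held-out score, possibly accelerated by early stopping or surrogates.
def evaluate(config):
    # Toy objective that prefers lr=0.01 and num_layers=2.
    return -abs(config["learning_rate"] - 0.01) - abs(config["num_layers"] - 2)

# 2. Search algorithm: random search over the space.
def random_search(n_trials=20, seed=0):
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best, score = random_search()
print(best, score)
```

Swapping `random_search` for Bayesian optimization or an evolutionary algorithm changes only how the next `config` is proposed; the search space and evaluation strategy stay the same, which is why the survey treats the three factors as independent axes of its taxonomy.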

Implications and Applications

The implications of AutoML are vast: it extends ML capabilities to non-expert users while accelerating model development cycles. The paper discusses applications ranging from configuring ML pipelines and optimizing neural architectures with one-shot methods to the nascent domain of foundation models. The evolving landscape of foundation models such as LLMs presents new challenges and opportunities for AutoML, particularly in automating pre-training, fine-tuning, and inference optimization.
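The one-shot idea referenced above (and the softmax relaxation mentioned under the search-space discussion) can be sketched in a few lines: a discrete choice among candidate operations is relaxed into a softmax-weighted mixture, so architecture parameters become continuous and differentiable, as in DARTS-style methods. The operation set and inputs here are purely illustrative.

```python
import math

# Candidate operations for one edge of the architecture (illustrative).
OPS = {
    "identity": lambda x: x,
    "double": lambda x: 2.0 * x,
    "negate": lambda x: -x,
}

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def mixed_op(x, alpha):
    """Continuous relaxation: the output is a softmax-weighted sum over all
    candidate operations, so the architecture logits `alpha` can be updated
    by gradient descent alongside the model weights."""
    weights = softmax(alpha)
    return sum(w * op(x) for w, op in zip(weights, OPS.values()))

def discretize(alpha):
    """After the search, keep only the highest-weight operation."""
    names = list(OPS)
    return names[max(range(len(alpha)), key=lambda i: alpha[i])]

# Equal logits give an even mixture; a dominant logit approaches a
# discrete choice, which `discretize` then makes exact.
print(mixed_op(1.0, [0.0, 0.0, 0.0]))  # average of 1, 2, and -1
print(discretize([0.1, 5.0, -2.0]))
```

In a real one-shot supernet every edge carries such a mixture and all candidates share trained weights, which is precisely what makes a single training run suffice for evaluating the whole search space.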

Emerging Directions

The paper concludes with a discussion of emerging directions spanning problem setups, technical advancements, theoretical insights, and practical applications. It points toward increasing AutoML's adaptability to novel learning problems such as few-shot and transfer learning, advancing techniques for search efficiency, and deepening theoretical understanding of convergence and generalization. Furthermore, the paper highlights the utility of AutoML in diverse domains, from biomedical research to edge computing, underscoring its transformative potential across industries.

In sum, this paper explores the multifaceted nature of AutoML, providing a thorough investigation into its theoretical underpinnings, operational mechanisms, and future trajectories. As AutoML technologies mature, their integration into the broader ML ecosystem promises to yield more accessible and efficient machine learning systems, potentially redefining the landscape of automated model development.
