DHA: End-to-End Joint Optimization of Data Augmentation Policy, Hyper-parameter and Architecture (2109.05765v2)

Published 13 Sep 2021 in cs.LG and cs.CV

Abstract: Automated machine learning (AutoML) typically involves several crucial components, such as the Data Augmentation (DA) policy, Hyper-Parameter Optimization (HPO), and Neural Architecture Search (NAS). Although many strategies have been developed for automating these components in isolation, their joint optimization remains challenging due to the greatly enlarged search space and the differing input types of each component. In parallel, the common NAS practice of first searching for the optimal architecture and then retraining it before deployment often suffers from low performance correlation between the search and retraining stages. An end-to-end solution that integrates the AutoML components and returns a ready-to-use model at the end of the search is therefore desirable. In view of this, we propose DHA, which achieves joint optimization of the Data augmentation policy, Hyper-parameter, and Architecture. Specifically, end-to-end NAS is achieved in a differentiable manner by optimizing a compressed lower-dimensional feature space, while the DA policy and HPO are treated as dynamic schedulers that adapt to the updates of the network parameters and architecture. Experiments show that DHA achieves state-of-the-art (SOTA) results on various datasets and search spaces. To the best of our knowledge, we are the first to efficiently and jointly optimize the DA policy, NAS, and HPO in an end-to-end manner without retraining.
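The loop the abstract describes can be made concrete with a short sketch. Below is an illustrative, heavily simplified Python/PyTorch example of jointly updating network weights, a compressed architecture embedding, a data-augmentation magnitude, and a learning rate in a single training run with no retraining stage. It is not the authors' implementation: the toy supernet, the adversarial update of the augmentation magnitude, and the hypergradient-style learning-rate rule (in the spirit of Baydin et al.'s hypergradient descent) are all assumed stand-ins for DHA's actual parameterizations and update rules.

```python
# Illustrative sketch only: hypothetical stand-ins for DHA's components.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySupernet(nn.Module):
    # Two candidate ops mixed by weights decoded from a compressed,
    # low-dimensional architecture embedding (a stand-in for DHA's
    # "compressed lower-dimensional feature space").
    def __init__(self, dim=16, embed_dim=4):
        super().__init__()
        self.op_a = nn.Linear(dim, dim)
        self.op_b = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())
        self.arch_embed = nn.Parameter(torch.zeros(embed_dim))
        self.arch_decoder = nn.Linear(embed_dim, 2)  # 2 candidate ops
        self.head = nn.Linear(dim, 2)

    def forward(self, x):
        alpha = F.softmax(self.arch_decoder(self.arch_embed), dim=-1)
        return self.head(alpha[0] * self.op_a(x) + alpha[1] * self.op_b(x))

net = TinySupernet()
da_mag = torch.tensor(0.1, requires_grad=True)  # toy DA "policy": noise magnitude
lr = 0.05                                       # hyper-parameter adapted online
beta = 1e-4                                     # hypergradient step size
prev_grad = None

for step in range(200):
    x = torch.randn(64, 16)                     # synthetic data for the sketch
    y = (x.sum(dim=1) > 0).long()

    x_aug = x + da_mag * torch.randn_like(x)    # differentiable augmentation
    loss = F.cross_entropy(net(x_aug), y)

    grads = torch.autograd.grad(loss, list(net.parameters()) + [da_mag])
    w_grads, da_grad = grads[:-1], grads[-1]

    # HPO as a dynamic scheduler: hypergradient-style learning-rate update,
    # increasing lr when successive gradients are aligned.
    flat = torch.cat([g.flatten() for g in w_grads])
    if prev_grad is not None:
        lr = max(1e-4, lr + beta * torch.dot(flat, prev_grad).item())
    prev_grad = flat.detach()

    with torch.no_grad():
        # Weights AND the architecture embedding descend on the same loss,
        # so the searched architecture is trained in place (no retraining).
        for p, g in zip(net.parameters(), w_grads):
            p -= lr * g
        # DA policy ascends the loss (an adversarial-augmentation stand-in),
        # adapting itself as the network and architecture update.
        da_mag += 0.01 * da_grad
        da_mag.clamp_(0.0, 1.0)
```

The point mirrored from the abstract is that all three kinds of parameters evolve in the same loop, so the model that exits the loop is the deployable one; DHA's actual scheduler and architecture parameterization differ in detail.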

Authors (7)
  1. Kaichen Zhou (30 papers)
  2. Lanqing Hong (72 papers)
  3. Shoukang Hu (38 papers)
  4. Fengwei Zhou (21 papers)
  5. Binxin Ru (24 papers)
  6. Jiashi Feng (295 papers)
  7. Zhenguo Li (195 papers)
Citations (10)
