
Evaluating the Search Phase of Neural Architecture Search (1902.08142v3)

Published 21 Feb 2019 in cs.LG and stat.ML
Abstract: Neural Architecture Search (NAS) aims to facilitate the design of deep networks for new tasks. Existing techniques rely on two stages: searching over the architecture space and validating the best architecture. NAS algorithms are currently compared solely based on their results on the downstream task. While intuitive, this fails to explicitly evaluate the effectiveness of their search strategies. In this paper, we propose to evaluate the NAS search phase. To this end, we compare the quality of the solutions obtained by NAS search policies with that of random architecture selection. We find that: (i) On average, the state-of-the-art NAS algorithms perform similarly to the random policy; (ii) the widely-used weight sharing strategy degrades the ranking of the NAS candidates to the point of not reflecting their true performance, thus reducing the effectiveness of the search process. We believe that our evaluation framework will be key to designing NAS strategies that consistently discover architectures superior to random ones.

An Analysis of the Effectiveness of Neural Architecture Search Strategies

The paper "Evaluating The Search Phase of Neural Architecture Search" provides a comprehensive evaluation of the search strategies employed by Neural Architecture Search (NAS) algorithms, an area garnering significant attention due to its potential to automate neural network design for a variety of tasks. Traditional evaluations of NAS algorithms center on their final performance on specific tasks, typically disregarding the effectiveness of the search phase itself. The authors challenge this status quo by systematically comparing state-of-the-art NAS algorithms against random search strategies and dissecting how components like weight sharing affect the search process.

Core Analysis and Observations

Neural Architecture Search involves two key phases: searching for an optimal architecture and validating it through training. State-of-the-art NAS algorithms, such as DARTS, NAO, and ENAS, leverage various search strategies ranging from gradient descent to reinforcement learning. However, the paper presents the following critical observations:

  1. Comparable Performance with Random Search: The experimental results suggest that the state-of-the-art NAS algorithms do not significantly outperform random sampling policies in finding optimal architectures, particularly within constrained search spaces. This calls into question the added value of complex NAS strategies over simpler stochastic policies.
  2. Impact of Weight Sharing: A pivotal finding of this paper is the detrimental effect of weight sharing on the search phase's reliability. Weight sharing, intended to reduce resource demand, negatively impacts the fidelity of architecture rankings during the search phase. The paper provides evidence showing that rankings produced by weight-sharing strategies are poorly correlated with their true performance when architectures are evaluated independently.
  3. Role of Search Space Constraints: The authors posit that the constrained nature of many NAS search spaces contributes to the similarity in performance between NAS methods and random search. In such spaces, even randomly selected architectures can achieve competitive performance.
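The random policy the paper uses as a baseline can be sketched in a few lines: sample candidate architectures uniformly from the search space, evaluate each one, and keep the best. The operation set, cell structure, and toy scoring function below are illustrative placeholders, not the paper's actual search space (e.g. the DARTS RNN cell is larger and more structured).

```python
import random

# Hypothetical operation set and cell size; placeholders for a real NAS
# search space such as the DARTS or ENAS cell.
OPS = ["tanh", "relu", "sigmoid", "identity"]
NUM_NODES = 4

def sample_architecture(rng):
    """Uniformly sample a cell: each node picks one op and one predecessor."""
    return [(rng.choice(OPS), rng.randrange(node))
            for node in range(1, NUM_NODES + 1)]

def random_search(evaluate, num_samples=8, seed=0):
    """Random-search policy: sample candidates, return the best-scoring one."""
    rng = random.Random(seed)
    candidates = [sample_architecture(rng) for _ in range(num_samples)]
    scores = [evaluate(arch) for arch in candidates]
    best = max(range(num_samples), key=lambda i: scores[i])
    return candidates[best], scores[best]

# Toy stand-in for "train the architecture and measure validation accuracy".
def toy_evaluate(arch):
    return sum(op != "identity" for op, _ in arch) / len(arch)

best_arch, best_score = random_search(toy_evaluate)
```

In the paper's setting, `evaluate` is a full train-and-validate cycle; the point of the comparison is that even this trivially simple policy matches the average performance of far more elaborate search strategies.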

Numerical and Experimental Insights

The paper's numerical experiments on standard benchmark datasets like Penn Treebank and CIFAR-10 reveal that differences in final architecture performance often fall within small margins. For instance, in evaluating RNN architectures, randomly sampled architectures perform comparably to, if not better than, those identified by DARTS and NAO. Moreover, Welch's t-tests across multiple runs show no statistically significant advantage for the NAS approaches over random policies.
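The statistical comparison can be illustrated with a small stdlib-only sketch of Welch's t-test, which, unlike Student's t-test, does not assume equal variances across the two groups. The accuracy values below are hypothetical, chosen only to mimic the tight spreads the paper reports.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic and Welch-Satterthwaite degrees of freedom."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical final test accuracies over independent runs (NOT the
# paper's numbers): one NAS policy vs. the random policy.
nas_runs    = [97.1, 97.3, 96.9, 97.2, 97.0]
random_runs = [97.0, 97.2, 97.1, 96.8, 97.3]

t, df = welch_t(nas_runs, random_runs)  # |t| near 0 => no significant gap
```

A t-statistic this close to zero (with a correspondingly large p-value) is exactly the pattern the paper reports: the null hypothesis that both policies perform the same cannot be rejected.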

Furthermore, using the Kendall Tau rank-correlation metric, the paper quantifies the ranking disorder introduced by weight sharing, showing that this disruption outweighs the method's intended efficiency gains and misleads the search process.
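Kendall's tau measures how well two rankings of the same items agree: +1 for identical rankings, 0 for unrelated ones, -1 for reversed ones. A minimal sketch, with hypothetical rankings standing in for the paper's stand-alone vs. weight-sharing evaluations:

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall's tau: (concordant - discordant) pairs over all pairs."""
    n = len(rank_a)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical rankings of five architectures: the "true" ranking from
# training each model independently vs. the ranking induced by a shared
# supernetwork. These values are illustrative, not from the paper.
true_rank         = [1, 2, 3, 4, 5]
weight_share_rank = [3, 1, 5, 2, 4]

tau = kendall_tau(true_rank, weight_share_rank)  # near 0: rankings disagree
```

A tau near zero, which is what the paper observes for weight-sharing rankings, means the supernetwork's ordering carries almost no information about which architectures are actually best, so a search guided by it is little better than random.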

Practical and Theoretical Implications

This research holds substantial implications for both practical NAS deployment and theoretical development in NAS methodology:

  • For Practitioners: The findings underscore the need for a critical reassessment of current NAS strategies, particularly in the use of weight sharing. Developers may need to balance computational efficiency with search effectiveness to achieve better results.
  • For NAS Researchers: The insights encourage deeper exploration into alternative search strategies that can either enhance or replace existing methods. The reevaluation of search space design and relaxation of constraints can potentially unlock more effective architectures.

Future Directions

The evaluative framework proposed could guide future research to refine NAS processes. Future work might focus on designing new architecture families or search strategies that improve upon the limitations exposed by this paper. Additionally, exploring hybrid approaches that combine random search's robustness with refined NAS techniques may offer promising avenues.

In conclusion, this paper effectively argues for a shift in focus towards more comprehensive evaluations of the search phase in NAS, urging the community to innovate beyond the conventional paradigms that have dominated the field thus far.

Authors (5)
  1. Kaicheng Yu (39 papers)
  2. Christian Sciuto (1 paper)
  3. Martin Jaggi (155 papers)
  4. Claudiu Musat (38 papers)
  5. Mathieu Salzmann (185 papers)
Citations (335)