
Learning Transferable Architectures for Scalable Image Recognition (1707.07012v4)

Published 21 Jul 2017 in cs.CV, cs.LG, and stat.ML

Abstract: Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (the "NASNet search space") which enables transferability. In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, named "NASNet architecture". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, NASNet achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28% in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO dataset.

Authors (4)
  1. Barret Zoph (38 papers)
  2. Vijay Vasudevan (24 papers)
  3. Jonathon Shlens (58 papers)
  4. Quoc V. Le (128 papers)
Citations (5,384)

Summary

  • The paper introduces a novel NASNet search space that automates the design of convolutional cells and transfers them from small to large datasets.
  • The paper reports state-of-the-art performance with a 2.4% error on CIFAR-10 and an 82.7% top-1 accuracy on ImageNet while reducing computational costs by 28%.
  • The paper demonstrates that scalable, repeatable cell architectures significantly improve efficiency and performance in image recognition tasks.

Learning Transferable Architectures for Scalable Image Recognition

This paper addresses the challenge of designing effective convolutional neural network (CNN) architectures for image recognition, a process that traditionally involves substantial manual architecture engineering. The authors propose to automate it by learning compact architectural motifs on a small dataset and transferring them to a larger one. The method builds on the Neural Architecture Search (NAS) framework while keeping the search computationally tractable.

Methodology

The core contribution is the introduction of the NASNet search space. Instead of searching for an entire network architecture, the NASNet search space focuses on identifying the most promising convolutional layer, termed a "cell." These cells are designed to be repeatable and scalable across datasets of varying sizes. Specifically, the paper differentiates between two types of convolutional cells: Normal Cells, which maintain the input dimensions, and Reduction Cells, which reduce the height and width of the feature map by a factor of two.
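The stacking scheme can be illustrated with a minimal sketch. This is not the paper's implementation: each cell is reduced to its effect on feature-map shape (real cells contain learned operations), and the repeat count and filter multiplier are illustrative defaults.

```python
# Hedged sketch: cells modeled only by their effect on (height, width, channels).

def apply_cell(shape, cell_type, filter_multiplier=2):
    """Return the output (height, width, channels) of one cell."""
    h, w, c = shape
    if cell_type == "normal":
        return (h, w, c)  # Normal Cell preserves spatial dimensions
    if cell_type == "reduction":
        # Reduction Cell halves height/width; filters are typically doubled
        return (h // 2, w // 2, c * filter_multiplier)
    raise ValueError(f"unknown cell type: {cell_type}")

def stack_cells(input_shape, n_normal=6, n_blocks=3):
    """CIFAR-style stacking: n_normal Normal Cells per block, with a
    Reduction Cell between consecutive blocks."""
    shape = input_shape
    for block in range(n_blocks):
        for _ in range(n_normal):
            shape = apply_cell(shape, "normal")
        if block < n_blocks - 1:
            shape = apply_cell(shape, "reduction")
    return shape
```

For a 32x32 CIFAR-10 input with 32 initial channels, two interleaved Reduction Cells bring the spatial resolution to 8x8 while the channel count grows, matching the usual "repeat and reduce" pattern of cell-based architectures.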

To enable the transfer of architectures from a small dataset to a larger one, the authors adopt a reinforcement learning-based search method. The controller, a recurrent neural network (RNN), generates candidate architectures by sampling cell structures, which are then trained and evaluated on the CIFAR-10 dataset. Because the search runs only on this small dataset, its cost is far lower than searching on ImageNet directly; the best-performing cells are then transferred to the more computationally demanding ImageNet dataset.
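The outer search loop can be sketched as follows. This is a deliberately simplified stand-in: the paper's controller is an RNN trained with reinforcement learning, and its reward is validation accuracy on CIFAR-10; here a random sampler and a synthetic reward take their places, and the operation list and block structure are illustrative.

```python
import random

# Illustrative operation set (a subset of the kinds of ops NAS considers).
OPS = ["3x3_sep_conv", "5x5_sep_conv", "3x3_avg_pool", "3x3_max_pool", "identity"]

def sample_cell(num_blocks=5, rng=random):
    """Sample a cell: each block selects two input hidden states
    (the cell's two inputs plus outputs of earlier blocks) and two ops."""
    cell = []
    for i in range(num_blocks):
        inputs = [rng.randrange(i + 2) for _ in range(2)]
        ops = [rng.choice(OPS) for _ in range(2)]
        cell.append((inputs, ops))
    return cell

def proxy_reward(cell):
    """Synthetic stand-in for the child network's validation accuracy."""
    return sum(op.startswith("3x3") for _, ops in cell for op in ops)

def search(n_samples=100, seed=0):
    """Sample candidate cells and keep the best-scoring one."""
    rng = random.Random(seed)
    best, best_reward = None, float("-inf")
    for _ in range(n_samples):
        cell = sample_cell(rng=rng)
        reward = proxy_reward(cell)
        if reward > best_reward:
            best, best_reward = cell, reward
    return best, best_reward
```

The essential structure survives the simplification: propose a cell, score it on the small proxy task, and keep improving the proposal distribution; in the paper, that last step is what the RL-trained RNN controller provides.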

Experimental Results

The paper reports state-of-the-art performance on CIFAR-10 and ImageNet. Notably:

  • On CIFAR-10, the NASNet-A model, augmented with cutout data augmentation, achieves an error rate of 2.4%, surpassing prior approaches.
  • On ImageNet, NASNet-A models achieve top-1 accuracy of 82.7% and top-5 accuracy of 96.2%, setting a new benchmark at the time of publication. This model also reduces computational demand by 28% compared to the best human-designed architectures.
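Part of the CIFAR-10 result rests on regularization: the abstract names ScheduledDropPath, in which paths within a cell are stochastically dropped with a probability that increases over the course of training. The following is a hedged sketch assuming a linear schedule and dropout-style rescaling of surviving paths; the per-path mechanics are illustrative, not the paper's exact implementation.

```python
import random

def drop_path_prob(step, total_steps, final_drop_prob=0.3):
    """Drop probability grows linearly from 0 to final_drop_prob."""
    return final_drop_prob * min(step / total_steps, 1.0)

def scheduled_drop_path(path_outputs, step, total_steps,
                        final_drop_prob=0.3, rng=random):
    """Zero out each path independently; keep at least one path alive."""
    p = drop_path_prob(step, total_steps, final_drop_prob)
    kept = [x if rng.random() >= p else 0.0 for x in path_outputs]
    if all(k == 0.0 for k in kept):          # never drop every path
        idx = rng.randrange(len(kept))
        kept[idx] = path_outputs[idx]
    return [k / (1.0 - p) for k in kept]     # rescale survivors
```

Early in training nothing is dropped, so the network trains normally; the regularization pressure ramps up only as training progresses, which is the scheduling insight the technique's name refers to.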

Additionally, the flexibility of NASNet's cell-based design allows the architecture to be scaled to match available computational resources, yielding superior performance across a range of budget constraints. For instance, a smaller NASNet variant achieves 74% top-1 accuracy on ImageNet, 3.1% better than equivalently-sized state-of-the-art models aimed at mobile platforms.

Theoretical and Practical Implications

The primary theoretical implication is the validation of the NASNet search space, demonstrating that a well-constructed search space allows the efficient transfer of learned architectures across datasets of different scales. Practically, this methodology automates the tedious process of neural architecture design, making it more accessible to researchers and practitioners.

Speculations on Future Developments

In the broader context of AI, the success of NASNet implies potential extensions in various directions:

  • Further refinement of the NAS search methods, possibly integrating more sophisticated optimization techniques like evolutionary algorithms or advanced reinforcement learning strategies.
  • Application to other domains within computer vision, such as semantic segmentation, pose estimation, and video analysis.
  • Exploration of NAS in modalities beyond images, such as text and speech, which could generalize the utility of these methods across AI disciplines.

Overall, this work demonstrates the value of learning scalable architectures through NAS, providing a robust framework that balances computational efficiency and model performance. As AI systems continue to evolve, the methods and insights from this paper will likely influence future advancements in automated neural architecture design.
