
AtomNAS: Fine-Grained End-to-End Neural Architecture Search (1912.09640v2)

Published 20 Dec 2019 in cs.CV

Abstract: Search space design is very critical to neural architecture search (NAS) algorithms. We propose a fine-grained search space comprised of atomic blocks, a minimal search unit that is much smaller than the ones used in recent NAS algorithms. This search space allows a mix of operations by composing different types of atomic blocks, while the search space in previous methods only allows homogeneous operations. Based on this search space, we propose a resource-aware architecture search framework which automatically assigns the computational resources (e.g., output channel numbers) for each operation by jointly considering the performance and the computational cost. In addition, to accelerate the search process, we propose a dynamic network shrinkage technique which prunes the atomic blocks with negligible influence on outputs on the fly. Instead of a search-and-retrain two-stage paradigm, our method simultaneously searches and trains the target architecture. Our method achieves state-of-the-art performance under several FLOPs configurations on ImageNet with a small searching cost. We open our entire codebase at: https://github.com/meijieru/AtomNAS.

Authors (7)
  1. Jieru Mei (26 papers)
  2. Yingwei Li (31 papers)
  3. Xiaochen Lian (11 papers)
  4. Xiaojie Jin (50 papers)
  5. Linjie Yang (48 papers)
  6. Alan Yuille (294 papers)
  7. Jianchao Yang (48 papers)
Citations (106)

Summary

AtomNAS: Fine-Grained End-to-End Neural Architecture Search

Neural Architecture Search (NAS) has become a pivotal approach to designing efficient neural networks, often producing architectures that outperform those manually crafted by experts. The paper "AtomNAS: Fine-Grained End-to-End Neural Architecture Search" introduces an approach that addresses limitations of existing NAS methodologies, focusing in particular on the granularity of the search space and the efficiency of the search process.

The authors propose a fine-grained search space built upon atomic blocks, minimal search units that are much smaller than the building blocks used by earlier NAS methods. Traditional NAS methods typically operate on larger, homogeneous blocks, limiting the variety of architectures that can be explored. By decomposing the network into finer atomic blocks, AtomNAS spans a richer, exponentially larger search space. This reformulation lets a single layer mix operations, such as convolutions with different kernel sizes, facilitating the search for optimal architectures under varying computational constraints.
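To make the idea concrete, here is a minimal PyTorch sketch, not the released implementation, of a layer assembled from atomic blocks. The class names, channel counts, and the exact 1x1 / depthwise / 1x1 decomposition are illustrative assumptions; the key point is that one layer sums many small blocks with heterogeneous kernel sizes.

```python
# Illustrative sketch (not the authors' code) of a layer built from atomic blocks:
# each block is a channel-wise expand (1x1) -> depthwise conv -> project (1x1)
# chain, and a layer mixes blocks with different kernel sizes.
import torch
import torch.nn as nn

class AtomicBlock(nn.Module):
    """One atomic search unit: 1x1 expand, k x k depthwise conv, 1x1 project."""
    def __init__(self, in_ch, mid_ch, out_ch, kernel_size):
        super().__init__()
        pad = kernel_size // 2
        self.ops = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, kernel_size, padding=pad,
                      groups=mid_ch, bias=False),   # depthwise convolution
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return self.ops(x)

class MixedLayer(nn.Module):
    """A layer is the sum of many atomic blocks with heterogeneous kernels."""
    def __init__(self, in_ch, out_ch, mid_ch_per_block=8,
                 kernel_sizes=(3, 5, 7), blocks_per_kernel=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            AtomicBlock(in_ch, mid_ch_per_block, out_ch, k)
            for k in kernel_sizes for _ in range(blocks_per_kernel)
        )

    def forward(self, x):
        return sum(block(x) for block in self.blocks)

layer = MixedLayer(in_ch=32, out_ch=32)
y = layer(torch.randn(1, 32, 56, 56))   # -> torch.Size([1, 32, 56, 56])
```

Because each block carries only a small slice of channels, adding or removing blocks adjusts both the operator mix and the channel allocation at a very fine granularity.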

In conjunction with the refined search space, AtomNAS introduces a resource-aware architecture search framework that automatically allocates computational resources, such as output channel numbers, by jointly considering the network's performance and its computational cost. To accelerate the search, a dynamic network shrinkage technique prunes, on the fly, atomic blocks whose influence on the output is negligible. Because shrinkage happens during training, the method departs from the traditional search-and-retrain two-stage paradigm and instead searches and trains the target architecture simultaneously.
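The resource-aware objective and the shrinkage step can be sketched as follows, under stated assumptions: block importance is read from BatchNorm scale factors (the paper tracks a momentum-smoothed version of these; raw values are used here for brevity), each block's L1 penalty is weighted by its FLOPs cost, and `blocks` could be the `MixedLayer.blocks` list from the previous sketch paired with a list of per-block costs. Function names and thresholds are hypothetical.

```python
# Minimal sketch of resource-aware regularization plus dynamic shrinkage,
# not the released AtomNAS code. A block's importance is read from BatchNorm
# scale factors; its L1 penalty is weighted by the block's FLOPs cost.
import torch
from torch import nn

def resource_aware_penalty(blocks, block_flops, lam=1e-4):
    """sum_i lam * cost_i * ||gamma_i||_1 over the atomic blocks."""
    penalty = 0.0
    for block, cost in zip(blocks, block_flops):
        # one choice of importance signal: the block's last BN scale factors
        gammas = [m.weight for m in block.modules() if isinstance(m, nn.BatchNorm2d)]
        penalty = penalty + lam * cost * gammas[-1].abs().sum()
    return penalty

@torch.no_grad()
def shrink(blocks, block_flops, threshold=1e-3):
    """Drop blocks whose scale factors have all decayed below the threshold."""
    kept, kept_flops = [], []
    for block, cost in zip(blocks, block_flops):
        gammas = [m.weight for m in block.modules() if isinstance(m, nn.BatchNorm2d)]
        if gammas[-1].abs().max().item() > threshold:
            kept.append(block)
            kept_flops.append(cost)
    return nn.ModuleList(kept), kept_flops
```

Called periodically inside the training loop (add `resource_aware_penalty(...)` to the task loss, then replace the layer's block list with the output of `shrink(...)`), the penalty drives low-value blocks' scales toward zero and the shrinkage removes them on the fly, which is what allows searching and training to proceed simultaneously rather than retraining a separately discovered architecture.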

One of the most compelling aspects of AtomNAS is its empirical performance. The method achieves state-of-the-art accuracy on ImageNet across several FLOPs budgets while keeping the search cost low: 75.9% top-1 accuracy at approximately 360M FLOPs, 0.9% above the previous best model. With the Swish activation function and Squeeze-and-Excitation (SE) modules added, AtomNAS pushes top-1 accuracy further to 77.6%, illustrating its adaptability and robustness.

The implications of AtomNAS are significant for both theoretical and practical advancements in NAS. The fine-grained search space and dynamic network shrinkage can inform future work in NAS by pushing the boundaries on efficiency and effectiveness of automatic architecture generation. Moreover, the resource-aware regularization demonstrates a successful approach to balancing accuracy with resource usage, which is critical in applications ranging from mobile devices to large-scale data centers.

In terms of future developments, the integration of emerging hardware architectures and further exploration of multi-objective optimization techniques could propel AtomNAS or similar frameworks into broader usage, potentially redefining the standards of NAS research.

In conclusion, AtomNAS presents a methodology that advances both the scope and the performance of NAS. By combining a fine-grained search space with resource-aware regularization, it sets a new benchmark in automated neural network design and opens avenues for future research that may extend beyond current applications to influence broader AI model design paradigms.
