On Latency Predictors for Neural Architecture Search (2403.02446v1)
Abstract: Efficient deployment of neural networks (NN) requires the co-optimization of accuracy and latency. For example, hardware-aware neural architecture search has been used to automatically find NN architectures that satisfy a latency constraint on a specific hardware device. Central to these search algorithms is a prediction model that provides a hardware latency estimate for a candidate NN architecture. Recent research has shown that the sample efficiency of these predictive models can be greatly improved by pre-training on a set of *training* devices with many samples, and then transferring the predictor to the *test* (target) device. Transfer learning and meta-learning methods have been used for this, but often exhibit significant performance variability. Additionally, existing latency predictors have largely been evaluated on hand-crafted training/test device sets, making it difficult to ascertain which design features compose a robust and general latency predictor. To address these issues, we introduce a comprehensive suite of latency prediction tasks obtained in a principled way through automated partitioning of hardware device sets. We then design a general latency predictor to comprehensively study (1) the predictor architecture, (2) NN sample selection methods, (3) hardware device representations, and (4) NN operation encoding schemes. Building on conclusions from our study, we present an end-to-end latency predictor training strategy that outperforms existing methods on 11 out of 12 difficult latency prediction tasks, improving latency prediction by 22.5% on average, and by up to 87.6% on the hardest tasks. When used for latency prediction in hardware-aware NAS, our approach yields a 5.8× speedup in wall-clock time. Our code is available at https://github.com/abdelfattah-lab/nasflat_latency.
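The abstract describes the general pre-train-then-transfer pattern only at a high level. The sketch below illustrates that pattern with a minimal PyTorch predictor: an MLP over an architecture encoding concatenated with a learned device embedding, pre-trained on many samples from training devices and fine-tuned on a few samples from an unseen test device. This is not the paper's implementation; all names (`LatencyPredictor`, `pretrain`, `transfer`) and hyperparameters are hypothetical placeholders.

```python
# Minimal sketch of few-shot latency-predictor transfer (assumed design,
# not the paper's actual method): an MLP over an architecture encoding
# plus a learned hardware-device embedding.
import torch
import torch.nn as nn

class LatencyPredictor(nn.Module):
    def __init__(self, arch_dim: int, num_devices: int, dev_dim: int = 8):
        super().__init__()
        # One embedding slot per training device, plus one spare slot
        # (index num_devices) reserved for the unseen test device.
        self.dev_emb = nn.Embedding(num_devices + 1, dev_dim)
        self.mlp = nn.Sequential(
            nn.Linear(arch_dim + dev_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, arch_enc: torch.Tensor, dev_idx: torch.Tensor) -> torch.Tensor:
        # Condition the latency estimate on the hardware representation.
        x = torch.cat([arch_enc, self.dev_emb(dev_idx)], dim=-1)
        return self.mlp(x).squeeze(-1)

def pretrain(model, loader, epochs=50, lr=1e-3):
    # Pre-train on abundant (architecture, device, latency) samples
    # collected from the training devices.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for arch_enc, dev_idx, latency in loader:
            loss = nn.functional.mse_loss(model(arch_enc, dev_idx), latency)
            opt.zero_grad()
            loss.backward()
            opt.step()

def transfer(model, few_shot_loader, test_dev_idx, epochs=20, lr=1e-4):
    # Few-shot adaptation: fine-tune on a handful of latency measurements
    # from the held-out test (target) device.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for arch_enc, latency in few_shot_loader:
            dev = torch.full((arch_enc.size(0),), test_dev_idx, dtype=torch.long)
            loss = nn.functional.mse_loss(model(arch_enc, dev), latency)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

A stronger predictor along these lines would replace the flat architecture encoding with a graph-based operation encoding and the learned embedding with a measured hardware representation (e.g., latencies of a small reference set of architectures on the target device), which correspond to the design axes (1)-(4) the study varies.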
Authors: Yash Akhauri, Mohamed S. Abdelfattah