Carbon-Efficient Neural Architecture Search (2307.04131v1)

Published 9 Jul 2023 in cs.LG and cs.AI

Abstract: This work presents a novel approach to neural architecture search (NAS) that aims to reduce energy costs and increase carbon efficiency during the model design process. The proposed framework, called carbon-efficient NAS (CE-NAS), consists of NAS evaluation algorithms with different energy requirements, a multi-objective optimizer, and a heuristic GPU allocation strategy. CE-NAS dynamically balances energy-efficient sampling and energy-consuming evaluation tasks based on current carbon emissions. Using a recent NAS benchmark dataset and two carbon traces, our trace-driven simulations demonstrate that CE-NAS achieves better carbon and search efficiency than three baselines.
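To make the balancing idea concrete, below is a minimal, hypothetical sketch of the kind of carbon-aware GPU split the abstract describes: shifting a fixed GPU budget between low-energy architecture sampling and high-energy candidate evaluation as grid carbon intensity changes. The function name `allocate_gpus`, the gCO2/kWh thresholds, and the linear interpolation are illustrative assumptions, not the paper's actual heuristic.

```python
# Hypothetical sketch of CE-NAS's scheduling idea (not the paper's exact
# heuristic): split a fixed GPU budget between a low-energy sampling task
# (e.g., one-shot supernet evaluation) and a high-energy full-training
# evaluation task, depending on the grid's current carbon intensity.

def allocate_gpus(total_gpus: int,
                  carbon_intensity: float,
                  low: float = 100.0,    # assumed gCO2/kWh floor: grid is "clean"
                  high: float = 400.0    # assumed gCO2/kWh ceiling: grid is "dirty"
                  ) -> tuple[int, int]:
    """Return (gpus_for_sampling, gpus_for_evaluation).

    When the grid is clean, devote most GPUs to energy-hungry candidate
    evaluation; when it is dirty, shift GPUs toward cheap architecture
    sampling and defer evaluation to a cleaner period.
    """
    # Map carbon intensity to a fraction in [0, 1]: 0 = clean, 1 = dirty.
    dirtiness = min(max((carbon_intensity - low) / (high - low), 0.0), 1.0)
    sampling_gpus = round(total_gpus * dirtiness)
    return sampling_gpus, total_gpus - sampling_gpus


if __name__ == "__main__":
    for intensity in (80.0, 250.0, 500.0):  # example gCO2/kWh readings
        sample, evaluate = allocate_gpus(total_gpus=8, carbon_intensity=intensity)
        print(f"{intensity:6.1f} gCO2/kWh -> sampling: {sample}, evaluation: {evaluate}")
```

Under these assumed thresholds, an 8-GPU budget would go almost entirely to evaluation at 80 gCO2/kWh and almost entirely to sampling at 500 gCO2/kWh, mirroring the dynamic balance the abstract describes.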

Authors (2)
  1. Yiyang Zhao
  2. Tian Guo
Citations (1)