FL-NAS: Towards Fairness of NAS for Resource Constrained Devices via Large Language Models (2402.06696v1)

Published 9 Feb 2024 in cs.LG and cs.AI

Abstract: Neural Architecture Search (NAS) has become the de facto tool in industry for automating the design of deep neural networks for various applications, especially those driven by mobile and edge devices with limited computing resources. Emerging LLMs, owing to their prowess, have also recently been incorporated into NAS and show promising results. This paper explores this direction further by considering three important design metrics simultaneously, i.e., model accuracy, fairness, and hardware deployment efficiency. We propose a novel LLM-based NAS framework, FL-NAS, and show experimentally that FL-NAS can indeed find high-performing DNNs, beating state-of-the-art DNN models by orders of magnitude across almost all design considerations.
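
The abstract only describes the framework at a high level, so the sketch below is purely illustrative: a toy multi-objective search loop in which an LLM would propose candidate architectures that are scored on accuracy, fairness, and on-device latency. It is not the authors' implementation; the LLM call is stubbed out with a random perturbation, and the search space, surrogate evaluation, and objective weights are all assumptions made for the example.

```python
# Minimal sketch of an LLM-guided, multi-objective NAS loop in the spirit of
# FL-NAS (NOT the paper's method). The LLM proposal step is stubbed so the
# example runs offline; all numbers and weights are illustrative assumptions.
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    depth: int          # number of layers
    width: int          # channels per layer
    accuracy: float     # proxy validation accuracy (higher is better)
    fairness_gap: float # e.g., average-minus-worst-group accuracy (lower is better)
    latency_ms: float   # estimated on-device latency (lower is better)

def propose_architecture(history):
    """Stand-in for an LLM call that reads the search history and proposes a
    new (depth, width) configuration; replaced here by a random perturbation
    of the best candidate so the sketch is self-contained."""
    if history:
        best = min(history, key=objective)
        return (max(2, best.depth + random.choice([-1, 0, 1])),
                max(8, best.width + random.choice([-8, 0, 8])))
    return (random.randint(2, 8), random.choice([16, 32, 64]))

def evaluate(depth, width):
    """Toy surrogate evaluation; a real NAS run would train and profile the network."""
    accuracy = 0.7 + 0.02 * depth + 0.001 * width - random.uniform(0, 0.05)
    fairness_gap = max(0.0, 0.15 - 0.01 * depth + random.uniform(0, 0.03))
    latency_ms = 1.5 * depth * (width / 16)
    return Candidate(depth, width, min(accuracy, 0.99), fairness_gap, latency_ms)

def objective(c, latency_budget_ms=40.0):
    """Scalarized score (lower is better); weights and budget are arbitrary."""
    penalty = 10.0 if c.latency_ms > latency_budget_ms else 0.0
    return -c.accuracy + 2.0 * c.fairness_gap + 0.01 * c.latency_ms + penalty

history = []
for step in range(20):
    depth, width = propose_architecture(history)
    history.append(evaluate(depth, width))

best = min(history, key=objective)
print(f"best: depth={best.depth} width={best.width} "
      f"acc={best.accuracy:.3f} gap={best.fairness_gap:.3f} lat={best.latency_ms:.1f}ms")
```

In this kind of loop, swapping the stubbed proposal function for an actual LLM prompt (conditioned on the scored history) is what distinguishes LLM-based NAS from a plain random or evolutionary search.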

Authors (6)
  1. Ruiyang Qin (15 papers)
  2. Yuting Hu (41 papers)
  3. Zheyu Yan (23 papers)
  4. Jinjun Xiong (118 papers)
  5. Ahmed Abbasi (20 papers)
  6. Yiyu Shi (136 papers)
Citations (3)
