Neural Architecture Search for Improving Latency-Accuracy Trade-off in Split Computing (2208.13968v1)

Published 30 Aug 2022 in cs.LG and cs.DC

Abstract: This paper proposes a neural architecture search (NAS) method for split computing. Split computing is an emerging machine-learning inference technique that addresses the privacy and latency challenges of deploying deep learning in IoT systems. In split computing, neural network models are partitioned and cooperatively processed by IoT devices and edge servers over networks. The architecture of the neural network model therefore significantly impacts the communication payload size, model accuracy, and computational load. In this paper, we address the challenge of optimizing neural network architecture for split computing. To this end, we propose NASC, which jointly explores the optimal model architecture and split point to achieve higher accuracy while meeting latency requirements (i.e., keeping the total latency of computation and communication below a given threshold). NASC employs a one-shot NAS that requires no repeated model training, making the architecture search computationally efficient. Our performance evaluation using the hardware-aware benchmark HW-NAS-Bench demonstrates that the proposed NASC improves the trade-off between communication latency and model accuracy, reducing latency by approximately 40-60% from the baseline with only slight accuracy degradation.
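The latency constraint described in the abstract (total computation plus communication latency below a threshold) can be illustrated with a small sketch. The following Python example is a hypothetical illustration, not the authors' implementation: the `Candidate` fields, bandwidth, and budget values are all assumptions. It shows how one might rank (architecture, split point) candidates by accuracy subject to such a latency budget.

```python
# A minimal sketch (assumed, not from the paper) of latency-constrained
# selection over candidate (architecture, split point) pairs: total latency
# is device computation up to the split, communication of the intermediate
# feature payload, and server computation after the split.
from dataclasses import dataclass


@dataclass
class Candidate:
    accuracy: float          # validation accuracy of the candidate network
    device_latency_s: float  # IoT-device computation time up to the split point
    server_latency_s: float  # edge-server computation time after the split point
    payload_bytes: int       # size of intermediate features sent at the split


def total_latency(c: Candidate, bandwidth_bps: float) -> float:
    """Computation latency on both sides plus communication latency."""
    comm_latency_s = c.payload_bytes * 8 / bandwidth_bps
    return c.device_latency_s + comm_latency_s + c.server_latency_s


def select_best(candidates: list[Candidate],
                bandwidth_bps: float,
                latency_budget_s: float) -> Candidate | None:
    """Return the most accurate candidate that meets the latency budget."""
    feasible = [c for c in candidates
                if total_latency(c, bandwidth_bps) <= latency_budget_s]
    return max(feasible, key=lambda c: c.accuracy, default=None)


if __name__ == "__main__":
    # Illustrative candidates: deeper on-device prefixes cost more device
    # compute but may shrink the payload; all numbers are made up.
    pool = [
        Candidate(accuracy=0.92, device_latency_s=0.020,
                  server_latency_s=0.004, payload_bytes=200_000),
        Candidate(accuracy=0.90, device_latency_s=0.008,
                  server_latency_s=0.006, payload_bytes=60_000),
        Candidate(accuracy=0.94, device_latency_s=0.035,
                  server_latency_s=0.003, payload_bytes=500_000),
    ]
    best = select_best(pool, bandwidth_bps=10e6, latency_budget_s=0.08)
    print(best)
```

Note that in the paper's one-shot NAS setting, the accuracy of each candidate would come from evaluating a subnetwork of a single trained supernet rather than from training each candidate separately.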

Authors (6)
  1. Shoma Shimizu (1 paper)
  2. Takayuki Nishio (43 papers)
  3. Shota Saito (31 papers)
  4. Yoichi Hirose (3 papers)
  5. Chen Yen-Hsiu (1 paper)
  6. Shinichi Shirakawa (25 papers)
Citations (3)
