Inference Time Optimization Using BranchyNet Partitioning (2005.04099v2)

Published 1 May 2020 in cs.DC and eess.SP

Abstract: Deep Neural Network (DNN) applications combined with edge computing present a trade-off between responsiveness and computational resources. On one hand, edge computing can provide high responsiveness by deploying computational resources close to end devices, a level of responsiveness that may be prohibitive for most cloud computing services. On the other hand, DNN inference requires computational power that may not be available on edge devices but that a cloud server can provide. To resolve this trade-off, we partition a DNN between the edge device and the cloud server, so that the first DNN layers are processed at the edge and the remaining layers at the cloud. This paper proposes an optimal DNN partition according to the network bandwidth, the computational resources of the edge and the cloud, and parameters inherent to the data. Our proposal aims to minimize the inference time, enabling highly responsive applications. To this end, we show the equivalence between the DNN partitioning problem and the shortest path problem, and find an optimal solution using Dijkstra's algorithm.
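To make the shortest-path formulation concrete, below is a minimal sketch of how an edge/cloud split point could be chosen with Dijkstra's algorithm. All per-layer timings, output sizes, and the bandwidth value are hypothetical placeholders, and the graph construction is a simplified reading of the abstract (it ignores BranchyNet's early exits); it is not the authors' implementation.

```python
import heapq

def dijkstra(graph, source, target):
    """Plain Dijkstra over an adjacency dict {node: [(neighbor, weight), ...]}."""
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, node = [target], target
    while node in prev:
        node = prev[node]
        path.append(node)
    return dist.get(target, float("inf")), path[::-1]

# Hypothetical profiling data for a 4-layer network (all values are assumptions).
edge_time   = [0.80, 1.20, 1.50, 2.00]   # seconds per layer on the edge device
cloud_time  = [0.10, 0.15, 0.20, 0.25]   # seconds per layer on the cloud server
out_bytes   = [4e5, 2e5, 1e5, 5e4]       # bytes output by each layer
input_bytes = 6e5                        # bytes of the raw input
bandwidth   = 1e5                        # bytes/second on the uplink
n = len(edge_time)

# Vertex ("edge", k) / ("cloud", k): layers 1..k already processed, currently on that side.
graph = {("edge", k): [] for k in range(n + 1)}
graph.update({("cloud", k): [] for k in range(n + 1)})
for k in range(n):
    graph[("edge", k)].append((("edge", k + 1), edge_time[k]))     # run layer k+1 at the edge
    graph[("cloud", k)].append((("cloud", k + 1), cloud_time[k]))  # run layer k+1 in the cloud
for k in range(n + 1):
    upload = (input_bytes if k == 0 else out_bytes[k - 1]) / bandwidth
    graph[("edge", k)].append((("cloud", k), upload))              # offload after layer k

best_time, path = dijkstra(graph, ("edge", 0), ("cloud", n))
split = max(k for side, k in path if side == "edge")               # last layer kept at the edge
print(f"optimal split after layer {split}, estimated inference time {best_time:.2f} s")
```

Any source-to-sink path runs some prefix of layers at the edge, pays one transmission cost for the intermediate tensor, and runs the remaining layers in the cloud, so the shortest path corresponds to the partition with minimum total inference time.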

Authors (2)
  1. Roberto G. Pacheco (4 papers)
  2. Rodrigo S. Couto (4 papers)
Citations (24)
