
LCP: A Low-Communication Parallelization Method for Fast Neural Network Inference in Image Recognition (2003.06464v2)

Published 13 Mar 2020 in eess.SP and cs.LG

Abstract: Deep neural networks (DNNs) have inspired new studies in myriad edge applications with robots, autonomous agents, and Internet-of-things (IoT) devices. However, performing DNN inference at the edge is still a severe challenge, mainly because of the contradiction between the intensive resource requirements of DNNs and the tight resource availability in several edge domains. Further, because communication is costly, exploiting other available edge devices through data- or model-parallelism is not an effective solution. To benefit from available compute resources with low communication overhead, we propose the first DNN parallelization method aimed at reducing communication overhead in a distributed system. In our low-communication parallelization (LCP) method, models consist of several almost-independent, narrow branches. LCP offers close-to-minimum communication overhead with better distribution and parallelization opportunities, while significantly reducing memory footprint and computation compared to data- and model-parallelism methods. We deploy LCP models on three distributed systems: AWS instances, Raspberry Pis, and PYNQ boards. We also evaluate the performance of LCP models on customized hardware (tailored for low latency) implemented on a small edge FPGA and as a 16 mW, 0.107 mm² ASIC at 7 nm. LCP models achieve maximum and average speedups of 56x and 7x over the original models, which can be improved to an average speedup of 33x by incorporating common optimizations such as pruning and quantization.
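The central idea of LCP, per the abstract, is to replace one wide model with several almost-independent, narrow branches so that distributed devices only communicate twice: once to broadcast the input and once to gather the branch outputs. The sketch below is a minimal, hypothetical illustration of that structure using NumPy; the branch sizes, count, and fusion-by-concatenation are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_branch(in_dim, hidden, out_dim):
    # Each branch is a small, independent MLP (hypothetical sizes).
    w1 = rng.standard_normal((in_dim, hidden)) * 0.1
    w2 = rng.standard_normal((hidden, out_dim)) * 0.1
    return (w1, w2)

def run_branch(x, branch):
    # A branch runs end-to-end with no cross-branch communication.
    w1, w2 = branch
    h = np.maximum(x @ w1, 0.0)  # ReLU
    return h @ w2

# Four almost-independent narrow branches: each could live on a
# separate edge device. The only communication points are
# broadcasting the input x and gathering the branch outputs.
branches = [make_branch(64, 16, 8) for _ in range(4)]
x = rng.standard_normal(64)

outputs = [run_branch(x, b) for b in branches]  # fully parallelizable
logits = np.concatenate(outputs)                # single gather step
print(logits.shape)  # (32,)
```

In contrast, model parallelism would split each layer across devices and require an all-to-all exchange after every layer, which is the communication cost this branch structure is designed to avoid.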

Authors (9)
  1. Ramyad Hadidi
  2. Bahar Asgari
  3. Jiashen Cao
  4. Younmin Bae
  5. Da Eun Shim
  6. Hyojong Kim
  7. Sung-Kyu Lim
  8. Michael S. Ryoo
  9. Hyesoon Kim
Citations (1)
