
Retrospective: EIE: Efficient Inference Engine on Sparse and Compressed Neural Network (2306.09552v1)

Published 15 Jun 2023 in cs.AR

Abstract: EIE proposed to accelerate pruned and compressed neural networks by exploiting weight sparsity, activation sparsity, and 4-bit weight sharing in neural network accelerators. Since its publication at ISCA'16, it has opened a new design space for accelerating pruned and sparse neural networks and spawned many algorithm-hardware co-designs for model compression and acceleration, both in academia and in commercial AI chips. In retrospect, we review the background of this project, summarize the pros and cons, and discuss new opportunities where pruning, sparsity, and low precision can accelerate emerging deep learning workloads.
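To make the abstract's three ideas concrete, below is a minimal Python sketch of the dataflow they imply: a sparse matrix-vector product over a pruned weight matrix stored column-wise (CSC), where zero input activations let whole columns be skipped and weights are stored as 4-bit indices into a shared codebook. This is an illustrative sketch only, not EIE's hardware design; the function name eie_spmv and all variable names are assumptions introduced here.

    import numpy as np

    def eie_spmv(n_rows, col_ptr, row_idx, weight_codes, codebook, activations):
        """Compute y = W @ a for a pruned weight matrix W in CSC form.

        - Activation sparsity: columns whose input activation is zero
          are skipped entirely, so their weights are never fetched.
        - Weight sparsity: only nonzero weights are stored, as parallel
          arrays row_idx (row positions) and weight_codes (values).
        - Weight sharing: weight_codes holds small integer indices into
          a shared codebook (4-bit codes allow up to 16 entries).
        """
        y = np.zeros(n_rows)
        for j, a in enumerate(activations):
            if a == 0.0:  # activation sparsity: skip zero inputs
                continue
            # nonzeros of column j live in col_ptr[j] .. col_ptr[j+1]
            for k in range(col_ptr[j], col_ptr[j + 1]):
                y[row_idx[k]] += codebook[weight_codes[k]] * a
        return y

    # Toy example: a 4x3 matrix with 4 stored nonzeros and a sparse input.
    codebook = np.array([0.0, 0.5, -0.5, 1.0])   # shared weight values
    col_ptr  = np.array([0, 2, 2, 4])            # column j: col_ptr[j]:col_ptr[j+1]
    row_idx  = np.array([0, 2, 1, 3])
    codes    = np.array([1, 3, 2, 1])            # codebook indices per nonzero
    a        = np.array([2.0, 5.0, 0.0])         # a[2] == 0, so column 2 is skipped
    print(eie_spmv(4, col_ptr, row_idx, codes, codebook, a))  # [1. 0. 2. 0.]

In the actual accelerator these loops are distributed across parallel processing elements with rows interleaved among them; the sequential sketch only shows how the two forms of sparsity and the shared codebook interact.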

Authors (7)
  1. Song Han (155 papers)
  2. Xingyu Liu (56 papers)
  3. Huizi Mao (13 papers)
  4. Jing Pu (7 papers)
  5. Ardavan Pedram (9 papers)
  6. Mark A. Horowitz (3 papers)
  7. William J. Dally (21 papers)
Citations (5)
