
End-to-end 100-TOPS/W Inference With Analog In-Memory Computing: Are We There Yet? (2109.01404v1)

Published 3 Sep 2021 in cs.AR

Abstract: In-Memory Acceleration (IMA) promises major efficiency improvements in deep neural network (DNN) inference, but challenges remain in the integration of IMA within a digital system. We propose a heterogeneous architecture coupling 8 RISC-V cores with an IMA in a shared-memory cluster, analyzing the benefits and trade-offs of in-memory computing on the realistic use case of a MobileNetV2 bottleneck layer. We explore several IMA integration strategies, analyzing performance, area, and energy efficiency. We show that while pointwise layers achieve significant speed-ups over a software implementation, on depthwise layers the inability to efficiently map parameters onto the accelerator leads to a significant trade-off between throughput and area. We propose a hybrid solution where pointwise convolutions are executed on the IMA while depthwise convolutions run on the cluster cores, achieving a speed-up of 3x over SW execution while saving 50% of area compared to an all-in IMA solution with similar performance.
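The throughput/area trade-off the abstract describes can be illustrated with a back-of-the-envelope utilization calculation: a pointwise (1x1) convolution is a dense matrix that fills an in-memory crossbar, whereas a depthwise filter bank is block-diagonal, so a naive dense mapping leaves most crossbar cells unused. The sketch below is illustrative and not taken from the paper; the mapping scheme and the example channel count (144, as in a MobileNetV2 expanded bottleneck) are assumptions.

```python
# Hedged sketch: crossbar utilization of pointwise vs depthwise convolutions
# when weights are mapped onto a dense in-memory-computing array.
# The naive depthwise mapping below is an assumption for illustration.

def crossbar_utilization_pointwise(c_in: int, c_out: int) -> float:
    # A 1x1 convolution is a dense c_in x c_out weight matrix:
    # every crossbar cell in the mapped region holds a useful weight.
    return 1.0

def crossbar_utilization_depthwise(c: int, k: int = 3) -> float:
    # Depthwise: one kxk filter per channel, channels do not mix.
    # Mapped naively onto a dense array of (c*k*k) rows x c columns,
    # only the block-diagonal entries carry weights.
    used = c * k * k            # nonzero weights
    total = (c * k * k) * c     # allocated crossbar cells
    return used / total         # simplifies to 1/c

# Example: expanded MobileNetV2 bottleneck with 144 channels.
print(f"pointwise: {crossbar_utilization_pointwise(24, 144):.2%}")
print(f"depthwise: {crossbar_utilization_depthwise(144):.2%}")
```

With 144 channels, the naive depthwise mapping uses under 1% of the allocated cells, which is consistent with the abstract's observation that mapping depthwise parameters onto the accelerator costs disproportionate area, motivating the hybrid split (pointwise on IMA, depthwise on the RISC-V cores).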

Authors (6)
  1. Gianmarco Ottavi (11 papers)
  2. Geethan Karunaratne (25 papers)
  3. Francesco Conti (67 papers)
  4. Irem Boybat (22 papers)
  5. Luca Benini (362 papers)
  6. Davide Rossi (69 papers)
Citations (5)
