
End-to-End DNN Inference on a Massively Parallel Analog In Memory Computing Architecture (2211.12877v1)

Published 23 Nov 2022 in cs.DC

Abstract: The demand for computation resources and energy efficiency of Convolutional Neural Network (CNN) applications requires a new paradigm to overcome the "Memory Wall". Analog In-Memory Computing (AIMC) is a promising paradigm since it performs matrix-vector multiplications, the critical kernel of many ML applications, in-place in the analog domain within memory arrays structured as crossbars of memory cells. However, several factors limit the full exploitation of this technology, including the physical fabrication of the crossbar devices, which constrains the memory capacity of a single array. Multi-AIMC architectures have been proposed to overcome this limitation, but they have been demonstrated only for tiny and custom CNNs or with some layers performed off-chip. In this work, we present the full inference of an end-to-end ResNet-18 DNN on a 512-cluster heterogeneous architecture coupling a mix of AIMC cores and digital RISC-V cores, achieving up to 20.2 TOPS. Moreover, we analyze the mapping of the network on the available non-volatile cells, compare it with state-of-the-art models, and derive guidelines for next-generation many-core architectures based on AIMC devices.
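The core operation the abstract describes, a matrix-vector multiplication performed in-place by a crossbar of memory cells, can be sketched numerically. Below is a minimal, hedged model (not the paper's implementation): signed weights are mapped onto a differential pair of cell conductances, the output is the column-wise current sum, and Gaussian read noise stands in for analog non-idealities. All parameter names and values are illustrative assumptions.

```python
import numpy as np

def crossbar_mvm(weights, x, g_max=1.0, noise_std=0.01, rng=None):
    """Illustrative model of an analog in-memory matrix-vector multiply.

    Each weight is stored as the conductance of a memory cell; applying
    input voltages x produces output currents G @ x in a single step
    (Ohm's law per cell, Kirchhoff current summation per column).
    Signed weights use a differential pair of non-negative conductances.
    The noise term is a stand-in for analog non-idealities; the scaling
    and noise model are assumptions, not taken from the paper.
    """
    rng = np.random.default_rng(rng)
    scale = g_max / np.abs(weights).max()  # fit weights into [0, g_max]
    g_pos = np.clip(weights, 0, None) * scale   # positive-weight cells
    g_neg = np.clip(-weights, 0, None) * scale  # negative-weight cells
    i_out = g_pos @ x - g_neg @ x               # differential column currents
    return i_out + rng.normal(0.0, noise_std, size=i_out.shape)

W = np.array([[0.5, -1.0],
              [2.0, 0.25]])
x = np.array([1.0, -1.0])
# With noise disabled, the result equals W @ x up to the conductance scaling.
print(crossbar_mvm(W, x, noise_std=0.0))
```

In a real multi-AIMC system, a large layer that exceeds one crossbar's capacity would be tiled across several arrays, with digital cores (RISC-V in this work) handling partial-sum accumulation and the non-MVM operators.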

Authors (7)
  1. Nazareno Bruschi (7 papers)
  2. Giuseppe Tagliavini (21 papers)
  3. Angelo Garofalo (33 papers)
  4. Francesco Conti (67 papers)
  5. Irem Boybat (22 papers)
  6. Luca Benini (362 papers)
  7. Davide Rossi (69 papers)
Citations (1)
