PANTHER: A Programmable Architecture for Neural Network Training Harnessing Energy-efficient ReRAM (1912.11516v1)

Published 24 Dec 2019 in cs.DC, cs.AR, cs.ET, and eess.SP

Abstract: The wide adoption of deep neural networks has been accompanied by ever-increasing energy and performance demands due to the expensive nature of training them. Numerous special-purpose architectures have been proposed to accelerate training: both digital and hybrid digital-analog using resistive RAM (ReRAM) crossbars. ReRAM-based accelerators have demonstrated the effectiveness of ReRAM crossbars at performing matrix-vector multiplication operations that are prevalent in training. However, they still suffer from inefficiency due to the use of serial reads and writes for performing the weight gradient and update step. A few works have demonstrated the possibility of performing outer products in crossbars, which can be used to realize the weight gradient and update step without the use of serial reads and writes. However, these works have been limited to low precision operations which are not sufficient for typical training workloads. Moreover, they have been confined to a limited set of training algorithms for fully-connected layers only. To address these limitations, we propose a bit-slicing technique for enhancing the precision of ReRAM-based outer products, which is substantially different from bit-slicing for matrix-vector multiplication only. We incorporate this technique into a crossbar architecture with three variants catered to different training algorithms. To evaluate our design on different types of layers in neural networks (fully-connected, convolutional, etc.) and training algorithms, we develop PANTHER, an ISA-programmable training accelerator with compiler support. Our evaluation shows that PANTHER achieves up to $8.02\times$, $54.21\times$, and $103\times$ energy reductions as well as $7.16\times$, $4.02\times$, and $16\times$ execution time reductions compared to digital accelerators, ReRAM-based accelerators, and GPUs, respectively.
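The key arithmetic idea in the abstract, performing the weight-gradient and update step as an outer product whose low-precision partial results are recombined by shift-and-add, can be illustrated numerically. The sketch below is not the paper's circuit or ISA: the slice width, slice count, power-of-two learning rate, and all function names are assumptions chosen for illustration. It only shows how bit-slicing one operand of an outer product, analogous to bit-slicing for matrix-vector multiplication, reproduces the full-precision update.

```python
import numpy as np

SLICE_BITS = 2    # assumed bits per ReRAM cell/slice (illustrative)
NUM_SLICES = 8    # 8 x 2-bit slices -> 16-bit operands (illustrative)

def to_slices(x):
    """Split nonnegative integers into SLICE_BITS-wide slices, LSB first."""
    mask = (1 << SLICE_BITS) - 1
    return [(x >> (i * SLICE_BITS)) & mask for i in range(NUM_SLICES)]

def sliced_outer_update(W, x, delta, lr_shift):
    """Apply W <- W - eta * outer(delta, x) with eta = 2**-lr_shift.

    The activation vector x is bit-sliced so each partial outer product
    involves only low-precision values (as a crossbar could apply in
    place), and the partials are recombined by shift-and-add.
    """
    grad = np.zeros_like(W)
    for i, x_slice in enumerate(to_slices(x)):
        # each partial uses only SLICE_BITS-wide inputs
        grad += np.outer(delta, x_slice) << (i * SLICE_BITS)
    return W - (grad >> lr_shift)

# Usage: the sliced update matches the full-precision outer-product update.
W = np.zeros((3, 4), dtype=np.int64)
x = np.array([5, 1000, 7, 42], dtype=np.int64)    # int-quantized activations
delta = np.array([2, -1, 3], dtype=np.int64)      # backpropagated errors
W_new = sliced_outer_update(W, x, delta, lr_shift=4)
assert np.array_equal(W_new, -(np.outer(delta, x) >> 4))
```

In the architecture the abstract describes, the slices would map to ReRAM conductances and the shift-and-add would happen in crossbar periphery rather than in software; this NumPy version only mirrors the arithmetic, not the hardware.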

Authors (10)
  1. Aayush Ankit
  2. Izzat El Hajj
  3. Sai Rahul Chalamalasetti
  4. Sapan Agarwal
  5. Matthew Marinella
  6. Martin Foltin
  7. John Paul Strachan
  8. Dejan Milojicic
  9. Wen-mei Hwu
  10. Kaushik Roy
Citations (64)
