
Mixed-precision deep learning based on computational memory (2001.11773v1)

Published 31 Jan 2020 in cs.ET

Abstract: Deep neural networks (DNNs) have revolutionized the field of artificial intelligence and have achieved unprecedented success in cognitive tasks such as image and speech recognition. Training of large DNNs, however, is computationally intensive and this has motivated the search for novel computing architectures targeting this application. A computational memory unit with nanoscale resistive memory devices organized in crossbar arrays could store the synaptic weights in their conductance states and perform the expensive weighted summations in place in a non-von Neumann manner. However, updating the conductance states in a reliable manner during the weight update process is a fundamental challenge that limits the training accuracy of such an implementation. Here, we propose a mixed-precision architecture that combines a computational memory unit performing the weighted summations and imprecise conductance updates with a digital processing unit that accumulates the weight updates in high precision. A combined hardware/software training experiment of a multilayer perceptron based on the proposed architecture using a phase-change memory (PCM) array achieves 97.73% test accuracy on the task of classifying handwritten digits (based on the MNIST dataset), within 0.6% of the software baseline. The architecture is further evaluated using accurate behavioral models of PCM on a wide class of networks, namely convolutional neural networks, long short-term memory (LSTM) networks, and generative adversarial networks (GANs). Accuracies comparable to those of floating-point implementations are achieved without being constrained by the non-idealities associated with the PCM devices. A system-level study demonstrates a 173x improvement in the energy efficiency of the architecture when used for training a multilayer perceptron compared with a dedicated fully digital 32-bit implementation.
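The core mechanism described in the abstract lends itself to a short illustration: weight updates computed by backpropagation are accumulated in a high-precision digital variable, and only when the accumulated value exceeds the device update granularity is the corresponding number of imprecise conductance pulses applied to the crossbar. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the Gaussian write-noise model, the granularity `eps`, the learning rate, and the single-layer squared-error setup are all hypothetical choices made for clarity, and the analog matrix-vector product is idealized.

```python
import numpy as np

# Minimal sketch of mixed-precision training with a simulated analog
# crossbar (assumed noise model and granularity, not measured PCM data).

rng = np.random.default_rng(0)

n_in, n_out = 784, 10
eps = 0.01            # smallest conductance change one device pulse can realize
write_noise = 0.3     # relative std-dev of each imprecise conductance update

W = rng.normal(0, 0.1, (n_out, n_in))   # conductances stored in the crossbar
chi = np.zeros_like(W)                  # high-precision accumulator (digital unit)

def analog_matvec(W, x):
    """Weighted summation performed in place in the crossbar (idealized here)."""
    return W @ x

def apply_pulses(W, n_pulses):
    """Imprecise conductance updates: each pulse of nominal size eps is noisy."""
    noise = 1.0 + write_noise * rng.standard_normal(W.shape)
    return W + n_pulses * eps * noise

for step in range(1000):
    x = rng.normal(size=n_in)
    target = rng.normal(size=n_out)

    y = analog_matvec(W, x)              # forward pass in computational memory
    grad = np.outer(y - target, x)       # gradient of a squared-error loss

    chi -= 0.001 * grad                  # accumulate updates in high precision

    n_pulses = np.trunc(chi / eps)       # whole pulses the devices can realize
    W = apply_pulses(W, n_pulses)        # imprecise in-memory weight update
    chi -= n_pulses * eps                # keep the unapplied residue in chi
```

The design point this sketch tries to convey is that the devices only ever receive updates they can physically represent; sub-granularity gradient information is not lost, because it remains in the accumulator until it grows past `eps`.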

Authors (14)
  1. S. R. Nandakumar
  2. Manuel Le Gallo
  3. Christophe Piveteau
  4. Vinay Joshi
  5. Giovanni Mariani
  6. Irem Boybat
  7. Geethan Karunaratne
  8. Riduan Khaddam-Aljameh
  9. Urs Egger
  10. Anastasios Petropoulos
  11. Theodore Antonakopoulos
  12. Bipin Rajendran
  13. Abu Sebastian
  14. Evangelos Eleftheriou
Citations (75)
