
Neural Network Training on In-memory-computing Hardware with Radix-4 Gradients (2203.04821v2)

Published 9 Mar 2022 in eess.SY and cs.SY

Abstract: Deep learning training involves a large number of operations, dominated by high-dimensionality Matrix-Vector Multiplies (MVMs). This has motivated hardware accelerators that enhance compute efficiency, but data movement and memory accesses are proving to be key bottlenecks. In-Memory Computing (IMC) is an approach with the potential to overcome this, whereby computations are performed in-place within dense 2-D memory. However, IMC fundamentally trades efficiency and throughput gains for dynamic-range limitations, raising distinct challenges for training, where compute precision requirements are substantially higher than for inference. This paper explores training on IMC hardware by leveraging two recent developments: (1) a training algorithm enabling aggressive quantization through a radix-4 number representation; (2) IMC leveraging compute based on precision capacitors, whereby analog noise effects can be made well below quantization effects. Energy modeling calibrated to a measured silicon prototype implemented in 16nm CMOS shows that energy savings of over 400x can be achieved with full quantizer adaptability, where all training MVMs can be mapped to IMC, and 3x can be achieved for two-level quantizer adaptability, where two of the three training MVMs can be mapped to IMC.
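To make the radix-4 idea concrete, here is a minimal sketch of quantizing a gradient tensor to a sign plus a small base-4 exponent, which is the kind of low-dynamic-range format that can be mapped onto IMC columns. The function name `radix4_quantize`, the per-tensor scaling, and the choice of four exponent levels are illustrative assumptions for this sketch, not the paper's exact algorithm.

```python
import numpy as np

def radix4_quantize(grad, num_levels=4, scale=None):
    """Round each gradient magnitude to the nearest power of 4.

    Each entry is reduced to a sign bit and one of `num_levels` base-4
    exponents, so the dynamic range stays small. Per-tensor scaling and
    the level count are assumptions made for illustration.
    """
    if scale is None:
        scale = np.max(np.abs(grad)) + 1e-12  # assumed per-tensor scale
    normalized = np.abs(grad) / scale
    # Nearest base-4 exponent, clipped to the available levels.
    exp = np.round(np.log(normalized + 1e-12) / np.log(4.0))
    exp = np.clip(exp, -(num_levels - 1), 0)
    quantized = np.sign(grad) * (4.0 ** exp) * scale
    # Zero out entries far below the smallest representable level.
    quantized[normalized < 4.0 ** (-(num_levels - 1) - 0.5)] = 0.0
    return quantized

# Example: quantize a small random gradient tensor.
g = np.random.randn(3, 4) * 0.1
print(radix4_quantize(g))
```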

Authors (2)
  1. Christopher Grimm (12 papers)
  2. Naveen Verma (10 papers)
Citations (4)