
FORMS: Fine-grained Polarized ReRAM-based In-situ Computation for Mixed-signal DNN Accelerator (2106.09144v1)

Published 16 Jun 2021 in cs.AR, cs.ET, and cs.LG

Abstract: Recent works demonstrated the promise of using resistive random access memory (ReRAM) as an emerging technology to perform inherently parallel analog domain in-situ matrix-vector multiplication -- the intensive and key computation in DNNs. With weights stored in the ReRAM crossbar cells as conductance, when the input vector is applied to word lines, the matrix-vector multiplication results can be generated as the current in bit lines. A key problem is that the weight can be either positive or negative, but the in-situ computation assumes that all cells in each crossbar column have the same sign. The current architectures either use two ReRAM crossbars for positive and negative weights, or add an offset to weights so that all values become positive. Neither solution is ideal: they either double the cost of crossbars, or incur extra offset circuitry. To better solve this problem, this paper proposes FORMS, a fine-grained ReRAM-based DNN accelerator with polarized weights. Instead of trying to represent the positive/negative weights, our key design principle is to enforce exactly what is assumed in the in-situ computation -- ensuring that all weights in the same column of a crossbar have the same sign. It naturally avoids the cost of an additional crossbar. Such weights can be nicely generated using alternating direction method of multipliers (ADMM) regularized optimization, which can exactly enforce certain patterns in DNN weights. To achieve high accuracy, we propose to use fine-grained sub-array columns, which provide a unique opportunity for input zero-skipping, significantly avoiding unnecessary computations. It also makes the hardware much easier to implement. Putting it all together, with the same optimized models, FORMS achieves significant throughput improvement and speedup in frames per second over ISAAC with similar area cost.

Authors (11)
  1. Geng Yuan (58 papers)
  2. Payman Behnam (11 papers)
  3. Zhengang Li (31 papers)
  4. Ali Shafiee (7 papers)
  5. Sheng Lin (29 papers)
  6. Xiaolong Ma (57 papers)
  7. Hang Liu (135 papers)
  8. Xuehai Qian (40 papers)
  9. Mahdi Nazm Bojnordi (5 papers)
  10. Yanzhi Wang (197 papers)
  11. Caiwen Ding (98 papers)
Citations (59)

Summary

FORMS: Fine-grained Polarized ReRAM-based In-situ Computation for Mixed-signal DNN Accelerator

The FORMS paper addresses a central inefficiency in Resistive Random Access Memory (ReRAM)-based accelerators for Deep Neural Network (DNN) computation. Existing architectures such as ISAAC and PRIME must represent both positive and negative weights, either by dedicating separate crossbars to each sign or by shifting all weight values into the positive range. Both approaches add hardware cost: the former doubles the number of crossbars, while the latter requires extra offset circuitry. FORMS instead proposes a fine-grained polarized weight representation that achieves efficiency without doubling the ReRAM hardware cost or adding offset circuits.
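To make the trade-off concrete, the following NumPy sketch contrasts the two conventional schemes for signed weights described above: a differential pair of crossbars storing the positive and negative parts, and an offset encoding that shifts all weights into the non-negative range. This is an illustrative sketch only; matrix sizes and variable names are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 128))   # signed weight matrix for one layer tile
x = rng.random(128)               # non-negative input activations

# Scheme 1: differential crossbar pair.  Positive and negative parts are
# mapped to two separate crossbars and the bit-line currents are
# subtracted -- doubling the crossbar cost.
W_pos = np.maximum(W, 0.0)
W_neg = np.maximum(-W, 0.0)
y_diff = W_pos @ x - W_neg @ x

# Scheme 2: offset encoding.  All weights are shifted by a constant so the
# stored conductances are non-negative; the known offset contribution is
# subtracted afterwards by extra circuitry.
offset = -W.min()
W_shift = W + offset              # every entry >= 0
y_offset = W_shift @ x - offset * x.sum()

# Both recover the true signed matrix-vector product.
assert np.allclose(y_diff, W @ x)
assert np.allclose(y_offset, W @ x)
```

FORMS avoids both the doubled crossbar count and the offset subtraction by guaranteeing that every crossbar column already holds weights of a single sign.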

Key Concepts and Innovations

  • Polarized Weight Representation: Unlike conventional designs that require separate crossbars for positive and negative weights, FORMS polarizes the weights stored in ReRAM crossbars: all weights within a given column of a crossbar sub-array are constrained to share the same sign (all positive or all negative), which simplifies the peripheral hardware.
  • Algorithm-Hardware Co-design: FORMS uses the Alternating Direction Method of Multipliers (ADMM) to train the DNN under the polarization constraint. Because the constraint is enforced during training, the resulting weights match exactly what the in-situ computation assumes, optimizing performance without sacrificing accuracy (a minimal illustrative sketch of such a projection step follows this list).
  • Fine-Grained Architecture: The design partitions crossbars into smaller logical sub-arrays, enabling targeted optimizations such as input zero-skipping: bit positions that are zero across all inputs of a fragment are skipped, significantly reducing the number of computation cycles (also sketched below). Fine-grained sub-arrays further ease the design of the peripheral AD/DA converters and improve robustness against non-idealities and noise.
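The exact ADMM formulation is given in the paper; the sketch below is only a minimal illustration of a projection that enforces the column-sign constraint, assuming a simple rule of keeping the dominant-sign weights in each sub-array column and zeroing the rest. The function name and the sub-array column size are hypothetical.

```python
import numpy as np

def polarize_columns(W, col_size=16):
    """Project W so every sub-array column holds weights of one sign.

    Each crossbar column is split into logical sub-array fragments of
    `col_size` cells; within each fragment the minority-sign weights are
    zeroed out (a simple stand-in for the ADMM projection step).
    """
    W = W.copy()
    rows, _cols = W.shape
    for c in range(W.shape[1]):
        for start in range(0, rows, col_size):
            frag = W[start:start + col_size, c].copy()
            pos_mag = frag[frag > 0].sum()
            neg_mag = -frag[frag < 0].sum()
            if pos_mag >= neg_mag:
                frag[frag < 0] = 0.0   # keep the fragment non-negative
            else:
                frag[frag > 0] = 0.0   # keep the fragment non-positive
            W[start:start + col_size, c] = frag
    return W
```

Input zero-skipping can be illustrated in the same spirit: for bit-serial input streaming, a bit position that is zero for every activation feeding a fragment needs no crossbar cycle. The sketch below counts the cycles that would actually be issued; it is an assumed illustration, not the paper's hardware control logic.

```python
def active_bit_cycles(inputs, num_bits=8):
    """Count bit-serial cycles needed for one sub-array fragment.

    A bit position is skipped when it is zero for every input value.
    """
    needed = 0
    for b in range(num_bits):
        if any((int(v) >> b) & 1 for v in inputs):
            needed += 1
    return needed

# Example: sparse activations after ReLU leave many all-zero bit planes.
frag_inputs = [0, 0, 3, 0, 1, 0, 0, 2]
print(active_bit_cycles(frag_inputs))  # -> 2 of 8 cycles needed
```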

Numerical Results and Claims

FORMS exhibits compelling numerical improvements:

  • With the same optimized DNN models, FORMS improves throughput over ISAAC by 1.50× in $\frac{\mathrm{GOPs}}{s \times mm^{2}}$ and 1.93× in $\frac{\mathrm{GOPs}}{W}$.
  • Notably, FORMS achieves a speedup of 1.12× to 2.4× in frames per second (FPS) under the same power and area constraints.

Implications and Future Directions

The results underscore the potential of algorithm-hardware co-design in advancing DNN accelerators, particularly through emerging memory technologies such as ReRAM. FORMS offers a blueprint for efficient and scalable architectures, which becomes increasingly important as applications extend from edge devices to large-scale AI models. Moving forward, co-design will remain central, with further refinement of techniques such as ADMM-based training and continued exploration of emerging memory technologies.

Other implications include opportunities for enhancing robustness in memory architectures prone to variations in manufacturing processes. Additionally, in light of the limitations in scaling traditional CMOS technology, insights from this work could inform future pursuits in alternative materials and device innovations. The continued investigation into efficient computing paradigms, such as mixed-signal processing, serves as a promising frontier for achieving greater power and area efficiency while maintaining computational integrity across different AI workloads.
