In-memory multiplication engine with SOT-MRAM based stochastic computing (1809.08358v1)

Published 22 Sep 2018 in cs.AR

Abstract: Processing-in-memory (PIM) is a promising solution for breaking through the memory wall and the power wall. While prior PIM designs successfully implement bitwise Boolean logic operations locally in memory, it is difficult to accomplish the multiplication (MUL) instruction in a fast and efficient manner. In this paper, we propose a new stochastic computing (SC) design to perform MUL with in-memory operations. Instead of using stochastic number generators (SNGs), we harness the inherent stochasticity in the memory write behavior of magnetic random access memory (MRAM). Each memory bit serves as an SC engine, performs MUL on operands in the form of write voltage pulses, and stores the MUL outcome in-situ. The proposed design provides up to 4x improvement in performance compared with conventional SC approaches, and achieves 18x speedup over implementing MUL with only in-memory bitwise Boolean logic operations.
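
The abstract's core idea can be illustrated with a minimal sketch of stochastic-computing multiplication. The assumptions here are not drawn from the paper's text: the model treats each MRAM cell's switching probability under a write pulse as equal to the operand value, and a cell ends up storing '1' only if both operand pulses succeed, so the stored bits are Bernoulli with probability a*b and averaging them estimates the product. The function name and parameters are hypothetical.

```python
import numpy as np

def mram_stochastic_mul(a, b, n_cells=4096, rng=None):
    """Sketch of SC multiplication via probabilistic MRAM writes.

    Assumption (illustrative, not the paper's circuit): a single write
    pulse switches a cell with probability equal to the operand value,
    and the cell reads as '1' only if both operand pulses switched it,
    so P(bit = 1) = a * b.
    """
    rng = rng or np.random.default_rng()
    pulse_a = rng.random(n_cells) < a   # first write pulse, switches w.p. a
    pulse_b = rng.random(n_cells) < b   # second write pulse, switches w.p. b
    stored = pulse_a & pulse_b          # '1' only if both switching events occur
    return stored.mean()                # read-out: fraction of '1' cells ~ a*b

print(mram_stochastic_mul(0.6, 0.5))    # ~0.30; accuracy improves with n_cells
```

As in conventional stochastic computing, the estimate's variance shrinks as the number of cells (or bitstream length) grows, which is the usual accuracy/latency trade-off of SC designs.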

Authors (6)
  1. Xin Ma (106 papers)
  2. Liang Chang (50 papers)
  3. Shuangchen Li (4 papers)
  4. Lei Deng (81 papers)
  5. Yufei Ding (81 papers)
  6. Yuan Xie (188 papers)
Citations (2)
