
Semi-Trained Memristive Crossbar Computing Engine with In-Situ Learning Accelerator (1808.07329v1)

Published 22 Aug 2018 in cs.ET

Abstract: On-device intelligence has gained significant attention recently, as it offers local data processing and low power consumption. In this research, on-device training circuitry for threshold-current memristors integrated in a crossbar structure is proposed. Furthermore, alternative approaches to mapping the synaptic weights into fully-trained and semi-trained crossbars are investigated. In a semi-trained crossbar, a confined subset of memristors is tuned while the remaining memristors are left unprogrammed. This translates to better resource utilization and lower power consumption compared to a fully programmed crossbar. The semi-trained crossbar architecture is applicable to a broad class of neural networks. System-level verification is performed with an extreme learning machine for binomial and multinomial classification. For a single 4x4 layer network implemented in the IBM 65 nm node, the total power is estimated to be ~42.16 µW and the area is estimated to be 26.48 µm x 22.35 µm.
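
To illustrate the semi-trained idea in software terms, the following is a minimal sketch, assuming an extreme learning machine in which the random input-layer weights stay fixed (analogous to the unprogrammed memristor subset) and only the output-layer weights are trained (analogous to the tuned subset). The dataset, layer sizes, and variable names are hypothetical and are not taken from the paper.

```python
# Hypothetical ELM sketch: fixed random hidden layer + trained output layer.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (two Gaussian blobs), purely illustrative.
n = 200
X = np.vstack([rng.normal(-1.0, 0.5, size=(n // 2, 4)),
               rng.normal(+1.0, 0.5, size=(n // 2, 4))])
y = np.hstack([np.zeros(n // 2), np.ones(n // 2)])

# "Semi-trained" structure: input-to-hidden weights are random and never
# updated (like the untuned memristors); only hidden-to-output weights are solved for.
hidden_size = 16
W_in = rng.uniform(-1, 1, size=(X.shape[1], hidden_size))  # fixed, not trained
b_in = rng.uniform(-1, 1, size=hidden_size)

H = np.tanh(X @ W_in + b_in)  # hidden-layer activations

# One-shot least-squares solve for the output weights (the "trained" subset).
W_out, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = (H @ W_out > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```

In this analogy, only the output-weight matrix would need to be programmed into tunable memristors, while the random input weights could be realized by devices left at their as-fabricated states, which is the resource- and power-saving argument the abstract makes.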

Citations (6)
