TReCiM: Lower Power and Temperature-Resilient Multibit 2FeFET-1T Compute-in-Memory Design (2501.01052v1)

Published 2 Jan 2025 in cs.ET

Abstract: Compute-in-memory (CiM) emerges as a promising solution to hardware challenges in AI and the Internet of Things (IoT), particularly the "memory wall" issue. By utilizing nonvolatile memory (NVM) devices in a crossbar structure, CiM efficiently accelerates multiply-accumulate (MAC) computations, the crucial operations in neural networks and other AI models. Among various NVM devices, the Ferroelectric FET (FeFET) is particularly appealing for ultra-low-power CiM arrays due to its CMOS compatibility, voltage-driven write/read mechanisms, and high ION/IOFF ratio. Moreover, subthreshold-operated FeFETs, which operate at scaled voltages in the subthreshold region, can further minimize the power consumption of the CiM array. However, subthreshold FeFETs are susceptible to temperature drift, which degrades computation accuracy. Existing solutions exhibit weak temperature resilience at larger array sizes and only support 1-bit operation. In this paper, we propose TReCiM, an ultra-low-power, temperature-resilient, multibit 2FeFET-1T CiM design that reliably performs MAC operations in the subthreshold-FeFET region at scale, over temperatures ranging from 0 to 85 degrees Celsius. We benchmark our design using the NeuroSim framework with a VGG-8 neural network architecture running on the CIFAR-10 dataset. Benchmarking results suggest that, when the impact of temperature drift is considered, our proposed TReCiM array achieves 91.31% accuracy, a 1.86% improvement over the existing 1-bit 2T-1FeFET CiM array. Furthermore, our proposed design achieves 48.03 TOPS/W energy efficiency at the system level, comparable to existing designs with smaller technology feature sizes.
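The core operation the abstract describes is the crossbar MAC: weights stored as (multibit) cell conductances, inputs applied as row voltages, and each column current summing the voltage-conductance products in parallel. The Python sketch below only illustrates that general idea under stated assumptions; the 2-bit weight mapping, conductance step, array size, and voltage range are hypothetical placeholders, not the TReCiM circuit, its device model, or the NeuroSim benchmark setup.

```python
import numpy as np

# Illustrative sketch of a crossbar MAC as described in the abstract.
# Weights are assumed stored as 2-bit cell conductances; inputs arrive as
# row voltages; each column current is the analog sum of V * G products.
# All numeric values below are hypothetical and not taken from the paper.

rng = np.random.default_rng(0)

G_LSB = 1e-6            # assumed conductance step per weight level (siemens)
N_ROWS, N_COLS = 128, 64

# 2-bit weights (levels 0..3) mapped linearly to conductances.
weights = rng.integers(0, 4, size=(N_ROWS, N_COLS))
conductance = weights * G_LSB

# Input activations encoded as row voltages (volts).
v_in = rng.uniform(0.0, 0.2, size=N_ROWS)

# Ideal column currents: one MAC result per column, computed in parallel.
i_out = v_in @ conductance              # shape (N_COLS,)

# Digital reference: the same multiply-accumulate done numerically.
mac_ref = v_in @ weights
print(np.allclose(i_out / G_LSB, mac_ref))  # True: currents encode the MACs
```

In a physical subthreshold-FeFET array, temperature drift would perturb the effective conductances and hence the column currents; compensating for that drift while supporting multibit weights is the problem the proposed TReCiM design targets.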
