
SLIM: Simultaneous Logic-in-Memory Computing Exploiting Bilayer Analog OxRAM Devices (1811.05772v2)

Published 14 Nov 2018 in cs.ET

Abstract: Von Neumann architecture based computers physically separate computation and storage units, i.e., data is shuttled between the computation unit (processor) and the memory unit to realize logic/arithmetic and storage functions. This to-and-fro movement of data leads to a fundamental limitation of modern computers, known as the memory wall. Logic-in-Memory (LIM) approaches aim to address this bottleneck by computing inside the memory units and thereby eliminating the energy-intensive and time-consuming data movement. However, most LIM approaches reported in the literature are not truly "simultaneous": during a LIM operation the bitcell can be used only as a memory cell or only as a logic cell, and it cannot store both the memory and logic outputs simultaneously. Here, we propose a novel 'Simultaneous Logic-in-Memory' (SLIM) methodology that implements both memory and logic operations simultaneously on the same bitcell in a non-destructive manner, without losing the previously stored memory state. Through extensive experiments we demonstrate the SLIM methodology using non-filamentary bilayer analog OxRAM devices with NMOS transistors (2T-1R bitcell). A detailed programming scheme, array-level implementation, and controller architecture are also proposed. Furthermore, to study the impact of introducing a SLIM array in the memory hierarchy, a simple image processing application (edge detection) is also investigated. It is estimated that by performing all computations inside the SLIM array, the total Energy Delay Product (EDP) reduces by ~40x in comparison to a modern-day computer. The EDP saving owing to the reduction in data transfer between CPU and memory is observed to be ~780x.

Citations (449)

Summary

  • The paper introduces a SLIM methodology that enables simultaneous logic and memory operations in a single 2T-1R bitcell using bilayer analog OxRAM devices.
  • It demonstrates a 40× improvement in the Energy Delay Product for in-memory computing and up to 780× savings by reducing CPU-memory data transfers.
  • The approach challenges conventional von Neumann architectures, offering a viable path for energy-efficient computing in real-time data processing and AI applications.

Simultaneous Logic-in-Memory Computing Using Bilayer Analog OxRAM Devices

This paper introduces a novel approach to Logic-in-Memory (LIM) computing through the proposed Simultaneous Logic-in-Memory (SLIM) methodology. SLIM is implemented using non-filamentary bilayer analog OxRAM devices combined with NMOS transistors, forming a 2T-1R bitcell in which logic and memory operations can proceed simultaneously. This marks a departure from existing LIM systems, in which the bitcell typically serves as either a memory cell or a logic cell, but not both at the same time.
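To make the "simultaneous" idea concrete, the toy model below sketches a bitcell whose analog state encodes a memory bit in a coarse conductance band and the most recent logic output as a fine offset within that band. This coarse/fine encoding is an assumption made here purely for illustration; it is not the device-level programming scheme of the bilayer OxRAM cell described in the paper.

```python
# Illustrative toy model of a "simultaneous logic-in-memory" bitcell.
# NOTE: the coarse/fine conductance encoding below is an assumption made
# for clarity, not the paper's bilayer OxRAM programming scheme.

class SlimBitcell:
    """One analog cell holding a memory bit and a logic result at once."""

    LEVELS = 8          # assumed number of distinguishable conductance levels
    HALF = LEVELS // 2  # levels 0-3 encode memory=0, levels 4-7 encode memory=1

    def __init__(self, memory_bit: int):
        # The memory bit selects the coarse conductance band.
        self.level = self.HALF * memory_bit + 1  # start inside the band

    def read_memory(self) -> int:
        # Coarse read: which band is the conductance in?
        return 1 if self.level >= self.HALF else 0

    def read_logic(self) -> int:
        # Fine read: position inside the band encodes the last logic output.
        return (self.level % self.HALF) >> 1  # upper part of band = logic 1

    def nor(self, a: int, b: int) -> int:
        """Apply NOR(a, b) and store the result without disturbing memory."""
        result = int(not (a or b))
        band_base = self.HALF * self.read_memory()
        # Move within the band only: lower quarter = 0, upper quarter = 1.
        self.level = band_base + (2 if result else 0) + 1
        return result


if __name__ == "__main__":
    cell = SlimBitcell(memory_bit=1)
    out = cell.nor(0, 0)                # logic output is 1
    assert cell.read_memory() == 1      # memory bit survives the operation
    assert cell.read_logic() == out
    print("memory:", cell.read_memory(), "logic:", cell.read_logic())
```

The point of the sketch is only the invariant it checks: applying a logic operation changes the cell's fine state while the coarse memory state remains readable, which is the non-destructive behavior the paper demonstrates at the device level.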

Key Results and Methodological Advances

The paper presents notable experimental results, demonstrating that the SLIM methodology can simultaneously execute a logic operation, store the resulting logic output, and preserve the pre-existing memory state in the same bitcell. The experimental implementation of NOR logic operations validates the efficacy of this approach. A standout metric is the estimated 40x improvement in Energy Delay Product (EDP) when computations are performed within the SLIM array rather than on a standard modern computer.
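Demonstrating NOR is significant because NOR is a functionally complete gate: any Boolean function can, in principle, be composed from in-memory NOR operations alone. The snippet below is a generic reminder of that composition, not code from the paper.

```python
# NOR is functionally complete: NOT, OR, and AND can all be built from it.
def NOR(a: int, b: int) -> int:
    return int(not (a or b))

def NOT(a: int) -> int:
    return NOR(a, a)

def OR(a: int, b: int) -> int:
    return NOT(NOR(a, b))

def AND(a: int, b: int) -> int:
    return NOR(NOT(a), NOT(b))

# Exhaustive check over all input pairs.
for a in (0, 1):
    for b in (0, 1):
        assert OR(a, b) == int(a or b)
        assert AND(a, b) == int(a and b)
```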

Furthermore, by reducing data transfer between the CPU and memory, an EDP saving of approximately 780x is observed. These gains stem from performing computation in memory, showcasing SLIM's potential to substantially lower the latency and energy consumption associated with moving data between the separate computing and memory units of traditional architectures.
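For reference, the Energy Delay Product is simply energy multiplied by execution time, so a 40x EDP reduction can come from any combination of energy and latency savings whose product is 40. The numbers below are hypothetical placeholders chosen only to show the arithmetic; they are not measurements from the paper.

```python
# Energy Delay Product: EDP = energy * delay.
# The figures below are hypothetical, used only to illustrate the ratio.
def edp(energy_j: float, delay_s: float) -> float:
    return energy_j * delay_s

baseline = edp(energy_j=2.0e-3, delay_s=1.0e-3)   # hypothetical CPU + DRAM cost
slim     = edp(energy_j=2.5e-4, delay_s=2.0e-4)   # hypothetical SLIM-array cost

print(f"EDP improvement: {baseline / slim:.0f}x")  # -> 40x with these placeholders
```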

Implications and Future Directions

The implications of SLIM extend across both the theoretical and practical dimensions of computing. Theoretically, it challenges the prevailing von Neumann paradigm by demonstrating that storage and logic operations can be performed simultaneously within the same memory bitcell using OxRAM devices. Practically, adopting SLIM technology could yield significant energy and speed improvements in applications that demand large-scale data processing and real-time computation, such as image processing or neural network inference.

Future research could optimize the SLIM architecture further by integrating more advanced oxide materials and sub-10nm technology nodes to achieve better scalability and lower operating voltages. The paper also suggests a potential path for integrating SLIM arrays into commercial multi-level cell (MLC) SSDs, a scenario in which storage resources could perform computational functions without compromising user data.

Conclusion

This paper provides a comprehensive investigation of the SLIM methodology, establishing its viability as a true simultaneous logic-in-memory system. By leveraging bilayer analog OxRAM devices, the approach achieves concurrent logic and memory operations, culminating in meaningful EDP gains and reduced data transfer overheads. These advances not only offer a promising direction for overcoming the "memory wall" in von Neumann architectures but also propose an efficient pathway toward energy-efficient computing systems. As SLIM methodologies evolve, their incorporation into various domains of AI and computing could signal a formidable shift in how logic and memory are intertwined within computational frameworks, pushing the boundaries of current technology limits.