Memristors -- from In-memory computing, Deep Learning Acceleration, Spiking Neural Networks, to the Future of Neuromorphic and Bio-inspired Computing (2004.14942v1)

Published 30 Apr 2020 in cs.ET and cs.NE

Abstract: Machine learning, particularly in the form of deep learning, has driven most of the recent fundamental developments in artificial intelligence. Deep learning is based on computational models that are, to a certain extent, bio-inspired, as they rely on networks of connected simple computing units operating in parallel. Deep learning has been successfully applied in areas such as object/pattern recognition, speech and natural language processing, self-driving vehicles, intelligent self-diagnostics tools, autonomous robots, knowledgeable personal assistants, and monitoring. These successes have been mostly supported by three factors: availability of vast amounts of data, continuous growth in computing power, and algorithmic innovations. The approaching demise of Moore's law, and the consequent expected modest improvements in computing power that can be achieved by scaling, raise the question of whether the described progress will be slowed or halted due to hardware limitations. This paper reviews the case for a novel beyond-CMOS hardware technology, memristors, as a potential solution for the implementation of power-efficient in-memory computing, deep learning accelerators, and spiking neural networks. Central themes are the reliance on non-von-Neumann computing architectures and the need for developing tailored learning and inference algorithms. To argue that lessons from biology can be useful in providing directions for further progress in artificial intelligence, we briefly discuss an example based on reservoir computing. We conclude the review by speculating on the big-picture view of future neuromorphic and brain-inspired computing systems.

Authors (6)
  1. Adnan Mehonic (9 papers)
  2. Abu Sebastian (67 papers)
  3. Bipin Rajendran (50 papers)
  4. Osvaldo Simeone (326 papers)
  5. Eleni Vasilaki (23 papers)
  6. Anthony J. Kenyon (33 papers)
Citations (179)

Summary

  • The paper argues that memristors can overcome the von Neumann bottleneck by integrating computing and storage to reduce latency and energy use.
  • It details how memristors map neural network weights and mimic synaptic functions to accelerate deep learning and power spiking neural networks.
  • The research underscores the need for interdisciplinary advances to scale memristive systems for future neuromorphic and bio-inspired computing.

Overview of Memristors in AI and Neuromorphic Computing

The paper "Memristors - from In-memory computing, Deep Learning Acceleration, Spiking Neural Networks, to the Future of Neuromorphic and Bio-inspired Computing" presents a comprehensive exploration into memristors as a promising technology for advancing AI hardware. Memristors, introduced by Leon Chua in 1971 and further developed in the late 2000s, are posited as potential successors to traditional CMOS technology, offering solutions that overcome the limitations posed by the von Neumann architecture.

Key Contributions and Findings

The research highlights the capabilities of memristors in facilitating in-memory computing (IMC), deep learning accelerators, and spiking neural networks (SNNs). The principal strengths of memristors are their non-volatility, their energy efficiency, and their ability to integrate memory and processing in a single device, which is pivotal to overcoming the von Neumann bottleneck.

In-Memory Computing: Memristors enable IMC by processing data in the same location where it is stored, exploiting the devices' inherent ability to perform matrix-vector multiplications via Ohm's and Kirchhoff's laws. This significantly reduces latency and energy consumption by obviating the need to shuttle data between separate processing and memory units.
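
The physics can be made concrete with a small numerical sketch. The NumPy model below is purely illustrative (the array shape, conductance range, and voltage values are arbitrary assumptions, not taken from the paper): each device contributes a current per Ohm's law, and Kirchhoff's current law sums the currents along each column, so the column currents deliver a matrix-vector product in a single analog step.

```python
import numpy as np

# Idealized crossbar: rows are driven by input voltages, columns collect current.
# Each device obeys Ohm's law (i = g * v); Kirchhoff's current law sums the
# currents of all devices on a column, so the column currents form G^T @ v.

rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances in siemens (4 rows x 3 cols)
v = np.array([0.2, 0.0, 0.1, 0.3])        # input voltages in volts, one per row

i_out = G.T @ v                            # column currents in amperes
print(i_out)                               # the matrix-vector product, computed in place
```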

Deep Learning Accelerators: Memristor-based accelerators offer considerable performance improvements for deep neural networks (DNNs) because synaptic weights map directly onto device conductance states. Reduced data movement and the high-density integration potential of memristors make both DNN inference and training more efficient.
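
One widely used mapping is differential encoding, in which each signed weight is represented by a pair of devices. The sketch below is schematic, under assumed parameters (the conductance window G_MIN/G_MAX and the layer shapes are illustrative, not values from the paper):

```python
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4  # assumed programmable conductance window (siemens)

def weights_to_conductance_pairs(W):
    """Differential encoding: weight w is proportional to (g_plus - g_minus)."""
    scale = (G_MAX - G_MIN) / np.abs(W).max()
    g_plus = G_MIN + scale * np.clip(W, 0, None)    # positive part of each weight
    g_minus = G_MIN + scale * np.clip(-W, 0, None)  # negative part of each weight
    return g_plus, g_minus, scale

def crossbar_matvec(g_plus, g_minus, scale, v):
    """Subtracting the two column currents recovers W^T @ v."""
    return ((g_plus - g_minus).T @ v) / scale

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))   # one layer's signed synaptic weights
v = rng.normal(size=4)        # input activations encoded as voltages

gp, gm, s = weights_to_conductance_pairs(W)
print(np.allclose(crossbar_matvec(gp, gm, s, v), W.T @ v))  # True
```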

Spiking Neural Networks: By mimicking the functionality of biological neurons and synapses, memristors are natural candidates for implementing power-efficient SNNs, which encode information in spike timings rather than in traditional rate-based codes. This, coupled with their ability to emulate synaptic plasticity mechanisms such as spike-timing-dependent plasticity (STDP), makes memristors well suited to future neuromorphic computing systems.
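
As a concrete, deliberately simplified example of such a plasticity rule, the snippet below implements the classic pair-based STDP window; the amplitudes and time constants are illustrative assumptions, not values from the paper:

```python
import numpy as np

A_PLUS, A_MINUS = 0.05, 0.055     # assumed potentiation/depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # assumed time constants in milliseconds

def stdp_dw(t_pre, t_post):
    """Pair-based STDP: pre-before-post potentiates, post-before-pre depresses."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+4d} ms -> dw = {stdp_dw(0.0, float(dt)):+.4f}")
```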

Implications and Future Directions

The versatility of memristive devices is expected to extend beyond memory storage to become integral components of brain-inspired computing paradigms. However, the transition from silicon-based computing to memristive systems necessitates interdisciplinary research that spans materials science, device physics, computer science, and neuroscience. The development of bio-inspired algorithms that exploit memristors' unique switching properties could accelerate progress in the burgeoning field of neuromorphic computing.

Challenges persist, notably in the variability of device characteristics and the need for scalable fabrication processes. Addressing these will require advances in device engineering and a deeper understanding of memristive dynamics. Additionally, stochastic computing models that leverage the intrinsic randomness of memristors suggest a route to robust, efficient learning systems.
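
A toy illustration of this last idea: treat each device as a binary synapse that switches with some probability per programming pulse, so the expected weight of a population tracks an analog target even though each device is only ever ON or OFF. This model and its parameters are assumptions for exposition only, not a device model from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def stochastic_program(states, p_switch):
    """Apply one SET pulse: each OFF device turns ON with probability p_switch."""
    flips = rng.random(states.shape) < p_switch
    return np.where(flips, 1, states)

states = np.zeros(10_000, dtype=int)   # population of binary devices, all OFF
for _ in range(3):                     # three pulses, each switching w.p. 0.3
    states = stochastic_program(states, p_switch=0.3)

# Expected ON fraction after 3 pulses: 1 - (1 - 0.3)**3 = 0.657
print(states.mean())
```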

In conclusion, memristors present a compelling path for the evolution of AI hardware, offering transformative potential for energy-efficient, high-performance computing systems. They hold promise not only as enablers of IMC but also as fundamental building blocks for sophisticated neuromorphic architectures that more closely emulate human cognitive processes. Such advances could reshape the landscape of artificial intelligence and computational neuroscience, fostering new capabilities in machine learning and bio-inspired computing systems.