
Efficient Memristive Spiking Neural Networks Architecture with Supervised In-Situ STDP Method (2507.20998v1)

Published 28 Jul 2025 in cs.ET and cs.NE

Abstract: Memristor-based Spiking Neural Networks (SNNs) with temporal spike encoding enable ultra-low-energy computation, making them ideal for battery-powered intelligent devices. This paper presents a circuit-level memristive spiking neural network (SNN) architecture trained using a proposed novel supervised in-situ learning algorithm inspired by spike-timing-dependent plasticity (STDP). The proposed architecture efficiently implements lateral inhibition and the refractory period, eliminating the need for external microcontrollers or ancillary control hardware. All synapses of the winning neurons are updated in parallel, enhancing training efficiency. The modular design ensures scalability with respect to input data dimensions and output class count. The SNN is evaluated in LTspice for pattern recognition (using 5×3 binary images) and classification tasks using the Iris and Breast Cancer Wisconsin (BCW) datasets. During testing, the system achieved perfect pattern recognition and high classification accuracies of 99.11% (Iris) and 97.9% (BCW). Additionally, it has demonstrated robustness, maintaining an average recognition rate of 93.4% under 20% input noise. The impact of stuck-at-conductance faults and memristor device variations was also analyzed.

Summary

  • The paper presents a novel memristive SNN architecture that incorporates a supervised in-situ STDP algorithm for efficient neural training on hardware.
  • The architecture leverages a 1T1M crossbar design with lateral inhibition and dual switch control to enable ultra-low-energy pattern recognition and classification.
  • Robustness is demonstrated by achieving 93.4% accuracy under 20% input noise and maintaining performance despite various memristor faults and threshold variations.

Efficient Memristive Spiking Neural Networks Architecture with Supervised In-Situ STDP Method

Introduction

The paper discusses a memristor-based Spiking Neural Network (SNN) architecture designed for ultra-low-energy computation, making it suitable for battery-powered intelligent devices. The architecture uses a novel supervised in-situ learning algorithm inspired by spike-timing-dependent plasticity (STDP). Key features include lateral inhibition, a refractory period, and parallel synapse updating. These elements improve training efficiency and eliminate the need for external control hardware, facilitating scalability with respect to input data dimensions and output class count.

Architecture Overview

The architecture centers around memristive devices within an SNN framework, offering high-density integration and low power consumption. Memristors are essential circuit elements due to their variable resistance states similar to biological synapses. The proposed SNN architecture leverages a 1T1M memristor crossbar design that supports both pattern recognition and classification tasks without requiring external microcontrollers or ancillary hardware (Figure 1).

Figure 1: Spiking neural network model. The features at the input layer are encoded into spikes and via synapses they travel to post-synaptic neurons at the output layer.
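The crossbar read-out described above can be sketched as an idealized conductance-weighted sum: by Ohm's and Kirchhoff's laws, each output column current is the dot product of the input line voltages with that column's memristor conductances. The names `G` and `v` and the numerical values below are illustrative assumptions, not the paper's device parameters.

```python
# Idealized 1T1M crossbar read-out: each column current is the
# conductance-weighted sum of the input row voltages.

def crossbar_currents(G, v):
    """G[i][j]: conductance of the memristor at row i, column j (siemens).
    v[i]: voltage applied on input row i (volts).
    Returns the current flowing into each output column (amperes)."""
    rows, cols = len(G), len(G[0])
    return [sum(G[i][j] * v[i] for i in range(rows)) for j in range(cols)]

# Two input rows, two output neurons (columns).
G = [[1e-4, 2e-4],
     [3e-4, 1e-4]]
v = [1.0, 0.5]
print(crossbar_currents(G, v))  # column currents in amperes
```

In the real circuit, transistor switches (the "1T" in 1T1M) gate which rows contribute, but the weighted summation itself is this same analog dot product.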

The architecture comprises several key components:

  • Temporal Spike Encoder: Converts input features into spikes using temporal encoding.
  • Dual Switches: Manage pre-synaptic and update spike flows into the memristive crossbar.
  • Leaky Integrate-and-Fire (LIF) Neurons: Generate post-spikes in response to weighted spike currents.
  • Control Circuits: Include lateral inhibition, synapse control, dual switch control, and update circuits for managing spike flow and synapse updates (Figure 2).

    Figure 2: (a) Overview of the memristive SNN architecture. The arrows indicate the information flow. (b) Architectural layout of the memristive SNN.
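The LIF neuron behavior listed among the components above can be sketched in discrete time: the membrane potential leaks, integrates the weighted input current, fires when it crosses a threshold, then holds in a refractory state. The leak factor, threshold, and refractory length here are illustrative values, not the paper's circuit parameters.

```python
# Minimal discrete-time leaky integrate-and-fire (LIF) neuron sketch.

def lif_spikes(currents, leak=0.9, threshold=1.0, refractory=2):
    """Return a 0/1 spike train for a sequence of input currents."""
    v, spikes, hold = 0.0, [], 0
    for i in currents:
        if hold > 0:            # refractory period: ignore input
            hold -= 1
            spikes.append(0)
            continue
        v = leak * v + i        # leaky integration of weighted spike current
        if v >= threshold:      # fire, reset, and enter refractory state
            spikes.append(1)
            v = 0.0
            hold = refractory
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.5, 0.5, 0.5, 0.5, 0.5]))  # → [0, 0, 1, 0, 0]
```

In the hardware, this leak-integrate-fire cycle is realized by analog circuitry, and the refractory period is enforced by the control circuits rather than software state.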

Training Mechanism

The memristive SNN uses a hardware-friendly supervised in-situ STDP algorithm. Training is performed directly on the neuromorphic hardware to exploit the fault tolerance and efficiency of memristors. The algorithm supervises weight changes based on spike-timing differences, following STDP rules.

During training, a bias current ensures that a neuron associated with the correct label becomes the "winner," leading to a distinct weight adjustment pattern. Training outcomes are visualized using heat maps to demonstrate synaptic weight consistency post-training (Figure 3).

Figure 3: (a) Binary training patterns, (b) heat map of final synaptic weights in μΩ⁻¹.
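The update rule above can be sketched schematically: only the winning neuron's synapses change, and each synapse is potentiated if its pre-spike preceded the post-spike (causal pairing) and depressed otherwise. The learning rates, conductance bounds, and function names are illustrative assumptions, not the paper's device-level update circuit.

```python
# Schematic supervised STDP update for the winning neuron's synapses only.
# All synapses are updated in parallel in hardware; the loop here is a
# software stand-in for that parallel update.

def stdp_update(g, pre_times, post_time, lr_pot=0.1, lr_dep=0.05,
                g_min=0.0, g_max=1.0):
    """g: current synaptic conductances of the winning neuron.
    pre_times: pre-spike time per synapse, or None if no pre-spike occurred.
    post_time: time at which the winning neuron fired."""
    out = []
    for gi, t_pre in zip(g, pre_times):
        if t_pre is not None and t_pre <= post_time:
            gi += lr_pot        # causal pre-before-post pairing: potentiate
        else:
            gi -= lr_dep        # no pre-spike or acausal pairing: depress
        out.append(min(g_max, max(g_min, gi)))  # clamp to device bounds
    return out

g = [0.5, 0.5, 0.5]
print(stdp_update(g, pre_times=[1.0, None, 5.0], post_time=3.0))
```

The supervision enters through the bias current: it forces the correct-label neuron to win, so this update is always applied to the column encoding the desired class.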

Robustness and Fault Analysis

The architecture demonstrates robustness against various noise levels, fault conditions, and device variations:

  • Noise Robustness: Achieves 93.4% accuracy even with 20% input noise.
  • Stuck-at-conductance Faults: Maintains high accuracy in scenarios where memristors are stuck.
  • Resistance and Threshold Variations: Analysis under varied boundary resistance and threshold voltage conditions indicates substantial resilience, albeit with sensitivity to excessive threshold variations.
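The noise experiment above can be reproduced in outline by flipping a fixed fraction of pixels in a binary input pattern. The 20% level matches the paper's test; the flipping procedure and seeding below are assumptions about how such noise is typically injected, not the paper's exact method.

```python
# Inject input noise by flipping a fixed fraction of pixels in a binary image.
import random

def add_noise(pattern, flip_fraction=0.2, rng=None):
    """Return a copy of `pattern` (a list of 0/1 pixels) with
    round(flip_fraction * len) distinct pixels flipped."""
    rng = rng or random.Random(0)   # seeded for reproducibility
    n_flip = int(round(flip_fraction * len(pattern)))
    idx = rng.sample(range(len(pattern)), n_flip)
    noisy = list(pattern)
    for i in idx:
        noisy[i] ^= 1               # flip 0 -> 1 or 1 -> 0
    return noisy

clean = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0]  # a 5x3 binary image
noisy = add_noise(clean, 0.2)
print(sum(a != b for a, b in zip(clean, noisy)))  # → 3 flipped pixels
```

Sweeping `flip_fraction` and measuring the recognition rate on the noisy patterns is how a robustness curve like the paper's 93.4%-at-20% result would be produced.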

Scalability and Practical Implications

The architecture scales effectively for larger pattern sizes and more classes without compromising accuracy. By employing modular and scalable design principles, the architecture can be adapted to handle more complex datasets, exemplified by training on scaled binary patterns (digits 0-9) while maintaining efficient resource use (Figure 4).

Figure 4: The final weights after training on digits 0 to 9 of size 7×3.

Future Directions

Future work aims to explore the practical deployment of the proposed memristive SNN in edge devices, assess detailed energy consumption characteristics, and evaluate the architecture with extensive datasets for real-world applications.

Conclusion

The paper presents a memristive SNN architecture optimized for both classification and pattern recognition tasks, achieving high accuracies with significant efficiency and robustness. Its design facilitates scalability and robustness against noise and device variations, offering promise for future AI applications in energy-constrained environments.
