
A 0.086-mm$^2$ 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28nm CMOS (1804.07858v3)

Published 20 Apr 2018 in cs.ET

Abstract: Shifting computing architectures from von Neumann to event-based spiking neural networks (SNNs) uncovers new opportunities for low-power processing of sensory data in applications such as vision or sensorimotor control. Exploring roads toward cognitive SNNs requires the design of compact, low-power and versatile experimentation platforms with the key requirement of online learning in order to adapt and learn new features in uncontrolled environments. However, embedding online learning in SNNs is currently hindered by high incurred complexity and area overheads. In this work, we present ODIN, a 0.086-mm$^2$ 64k-synapse 256-neuron online-learning digital spiking neuromorphic processor in 28nm FDSOI CMOS achieving a minimum energy per synaptic operation (SOP) of 12.7pJ. It leverages an efficient implementation of the spike-driven synaptic plasticity (SDSP) learning rule for high-density embedded online learning with only 0.68$\mu$m$^2$ per 4-bit synapse. Neurons can be independently configured as a standard leaky integrate-and-fire (LIF) model or as a custom phenomenological model that emulates the 20 Izhikevich behaviors found in biological spiking neurons. Using a single presentation of 6k 16$\times$16 MNIST training images to a single-layer fully-connected 10-neuron network with on-chip SDSP-based learning, ODIN achieves a classification accuracy of 84.5% while consuming only 15nJ/inference at 0.55V using rank order coding. ODIN thus enables further developments toward cognitive neuromorphic devices for low-power, adaptive and low-cost processing.

Authors (4)
  1. Charlotte Frenkel (22 papers)
  2. Martin Lefebvre (8 papers)
  3. Jean-Didier Legat (3 papers)
  4. David Bol (16 papers)
Citations (229)

Summary

Analysis of ODIN: A Digital Spiking Neuromorphic Processor

The paper presents ODIN, a digital spiking neuromorphic processor designed for high-density, low-power neuromorphic computation. It leverages a 28nm FDSOI CMOS technology to integrate a substantial number of neurons and synapses within a minuscule die area, optimizing for energy efficiency and adaptability. The design of ODIN addresses a critical challenge in neuromorphic engineering: embedding online learning in spiking neural networks (SNNs) to enable real-time adaptation and learning in dynamically changing environments.

Architectural and Implementation Highlights

ODIN comprises 256 neurons and 64k synapses within a die area of 0.086 mm². The processor implements an event-driven architecture, emphasizing sparse computation that is highly compatible with IoT applications. The architecture supports configurable neuron models, allowing for both standard Leaky Integrate-and-Fire (LIF) neurons and custom neurons capable of emulating the full range of 20 behaviors defined by Izhikevich. This versatility is augmented by the integration of spike-driven synaptic plasticity (SDSP), enabling efficient local learning with minimized area overhead per synapse.
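
As a rough illustration of the event-driven LIF dynamics described above, the Python sketch below shows how a neuron integrates weighted input spikes and applies a leak only when a leak event arrives. This is not ODIN's fixed-point implementation; the parameter names and values are placeholders.

```python
class LIFNeuron:
    """Minimal event-driven leaky integrate-and-fire neuron.

    Illustrative only: ODIN's actual neuron update is a configurable
    fixed-point digital implementation; the threshold, leak, and reset
    values below are arbitrary placeholders.
    """

    def __init__(self, v_thresh=1.0, v_leak=0.01, v_reset=0.0):
        self.v = v_reset          # membrane potential
        self.v_thresh = v_thresh  # firing threshold
        self.v_leak = v_leak      # leakage applied per leak event
        self.v_reset = v_reset    # reset potential after a spike

    def on_input_spike(self, weight):
        """Integrate a weighted input spike; return True if the neuron fires."""
        self.v += weight
        if self.v >= self.v_thresh:
            self.v = self.v_reset
            return True
        return False

    def on_leak_event(self):
        """Apply one leakage step (event-driven leak, no continuous clock)."""
        self.v = max(self.v - self.v_leak, self.v_reset)
```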

ODIN executes synaptic operations at a minimum energy of 12.7 pJ per SOP, signifying a high degree of power efficiency in a scaled technology node. Moreover, ODIN supports online learning through an efficient digital implementation of the SDSP rule, embedding each 4-bit plastic synapse in only 0.68 µm² without significant power trade-offs.
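
The SDSP rule lends itself to dense integration because the update is purely local: on each presynaptic spike, it reads only the postsynaptic membrane potential and a calcium trace of recent postsynaptic activity. The sketch below captures that update logic for a 4-bit weight; the threshold names and values are illustrative assumptions, not ODIN's configuration registers.

```python
def sdsp_update(weight, v_post, ca_post,
                theta_mem=0.8,
                ca_pot=(1.0, 3.0),   # calcium window enabling potentiation
                ca_dep=(1.0, 2.0),   # calcium window enabling depression
                w_min=0, w_max=15):  # 4-bit weight range
    """Spike-driven synaptic plasticity (SDSP) update on a presynaptic spike.

    Placeholder thresholds; the rule potentiates when the postsynaptic
    potential is high and the calcium trace sits in the potentiation
    window, and depresses in the complementary case.
    """
    if v_post > theta_mem and ca_pot[0] <= ca_post < ca_pot[1]:
        weight = min(weight + 1, w_max)   # potentiate
    elif v_post <= theta_mem and ca_dep[0] <= ca_post < ca_dep[1]:
        weight = max(weight - 1, w_min)   # depress
    return weight
```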

Performance and Learning Capabilities

The processor's ability to perform online learning is validated through on-chip experiments on the MNIST dataset. Using a single-layer fully-connected network of 10 neurons equipped with SDSP-based online learning, ODIN reaches 84.5% classification accuracy after a single presentation of 6k downscaled 16×16 training images, while consuming only 15 nJ per inference at 0.55V using rank order coding. These results, although not competitive with state-of-the-art neural networks, demonstrate the feasibility of on-chip learning with minimal hardware for specific low-power applications.
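
Rank order coding, used for the inference figures above, presents input spikes from the brightest pixel downward and stops as soon as an output neuron fires. A simplified sketch of such an inference loop, reusing the hypothetical LIFNeuron class from the earlier sketch, might look like this:

```python
import numpy as np

def rank_order_inference(image, weights, neurons):
    """Classify one image with rank order coding (illustrative only).

    image:   flattened pixel intensities, shape (n_inputs,)
    weights: synaptic weights, shape (n_outputs, n_inputs)
    neurons: list of n_outputs LIFNeuron instances (see earlier sketch)

    Pixels are presented as spikes from brightest to darkest; the first
    output neuron to fire gives the predicted class.
    """
    order = np.argsort(-image)           # brightest pixels spike first
    for pixel_idx in order:
        if image[pixel_idx] == 0:
            break                        # silent pixels never spike
        for label, neuron in enumerate(neurons):
            if neuron.on_input_spike(weights[label, pixel_idx]):
                return label             # first spike wins
    return None                          # no neuron fired
```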

ODIN also supports offline learning paradigms through quantization-aware stochastic gradient descent, achieving 91.9% accuracy with pre-trained weights. This flexibility shows the processor's capability to adapt to different application needs, utilizing either embedded learning for continuous adaptation in dynamic environments or leveraging pre-trained models for higher accuracy demands.
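
One common way to make offline SGD aware of low-resolution synapses is to quantize the weights in the forward pass while keeping full-precision shadow weights for the updates (a straight-through scheme). The sketch below illustrates that idea for symmetric 4-bit weights; it is a generic recipe, not necessarily the exact training setup used in the paper.

```python
import numpy as np

def quantize_4bit(w, w_max=1.0):
    """Map full-precision weights onto 16 evenly spaced levels in [-w_max, w_max]."""
    levels = 15
    w_clipped = np.clip(w, -w_max, w_max)
    return np.round((w_clipped + w_max) / (2 * w_max) * levels) \
        / levels * (2 * w_max) - w_max

def sgd_step(w_fp, grad, lr=0.01):
    """One quantization-aware SGD step on full-precision shadow weights.

    The forward pass uses quantize_4bit(w_fp); the gradient computed with
    the quantized weights is applied to w_fp (straight-through estimator).
    Returns the updated shadow weights and their quantized counterparts.
    """
    w_fp = w_fp - lr * grad
    return w_fp, quantize_4bit(w_fp)
```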

Implications and Future Directions

ODIN exemplifies a successful fusion of CMOS technology advances with neuromorphic design principles, particularly in achieving area and energy efficiency. Its architecture suggests promising applications for distributed sensory processing tasks in edge computing and autonomous systems, where real-time learning and low power consumption are imperative. This work, while catering to present IoT requirements, highlights the potential for scaling and integrating such systems into broader, more complex neural networks.

Future research could explore scaling the architecture to larger networks, examining how the communication infrastructure and synapse/neuron density hold up. Furthermore, optimizing the balance between synaptic plasticity and computational complexity could enhance the processor's applicability to more demanding neuromorphic tasks. Additionally, leveraging FDSOI technology for even lower-voltage operation could extend ODIN's efficiency and performance envelope, reinforcing the role of spiking neuromorphic processors as a viable alternative to traditional von Neumann architectures in specific domains.