
Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks (2007.05785v5)

Published 11 Jul 2020 in cs.NE, cs.CV, and cs.LG

Abstract: Spiking Neural Networks (SNNs) have attracted enormous research interest due to temporal information processing capability, low power consumption, and high biological plausibility. However, the formulation of efficient and high-performance learning algorithms for SNNs is still challenging. Most existing learning methods learn weights only, and require manual tuning of the membrane-related parameters that determine the dynamics of a single spiking neuron. These parameters are typically chosen to be the same for all neurons, which limits the diversity of neurons and thus the expressiveness of the resulting SNNs. In this paper, we take inspiration from the observation that membrane-related parameters are different across brain regions, and propose a training algorithm that is capable of learning not only the synaptic weights but also the membrane time constants of SNNs. We show that incorporating learnable membrane time constants can make the network less sensitive to initial values and can speed up learning. In addition, we reevaluate the pooling methods in SNNs and find that max-pooling will not lead to significant information loss and have the advantage of low computation cost and binary compatibility. We evaluate the proposed method for image classification tasks on both traditional static MNIST, Fashion-MNIST, CIFAR-10 datasets, and neuromorphic N-MNIST, CIFAR10-DVS, DVS128 Gesture datasets. The experiment results show that the proposed method outperforms the state-of-the-art accuracy on nearly all datasets, using fewer time-steps. Our codes are available at https://github.com/fangwei123456/Parametric-Leaky-Integrate-and-Fire-Spiking-Neuron.

Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks

The paper "Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks" explores a novel enhancement in the domain of Spiking Neural Networks (SNNs) by integrating learnable membrane time constants into their architectural framework. This approach contrasts with traditional SNN methodologies, which generally confine their adaptability to synaptic weights while keeping membrane-related parameters static and homogenous across neurons.

To address this limitation, the authors propose a backpropagation-based learning algorithm that jointly learns the synaptic weights and the membrane time constants of a spiking neuron model termed the Parametric Leaky Integrate-and-Fire (PLIF) neuron. This innovation allows for more nuanced modeling of neuronal dynamics and enhances the ability of SNNs to learn diverse temporal patterns. Specifically, membrane time constants are treated as shared parameters within individual layers, yet are distinct across different layers, thereby mimicking the heterogeneity observed in biological neurons across various brain regions. Optimizing these constants during training makes the network less sensitive to their initial values and can accelerate learning.
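To make the mechanism concrete, the sketch below shows a minimal PyTorch-style PLIF layer. It assumes the parameterization 1/τ = sigmoid(w), with a single learnable scalar w shared across the layer, and a sigmoid surrogate gradient for the non-differentiable spike function; the surrogate scale and initial τ here are illustrative choices, not the authors' exact settings, whose reference implementation is in the linked repository.

```python
import torch
import torch.nn as nn


class SurrogateHeaviside(torch.autograd.Function):
    """Spike generation with a sigmoid-derivative surrogate gradient for backprop."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        sg = torch.sigmoid(4.0 * x)  # smooth stand-in for the step function
        return grad_output * 4.0 * sg * (1.0 - sg)


class PLIFNeuron(nn.Module):
    """Parametric LIF layer: the membrane time constant is learned via a shared scalar."""

    def __init__(self, init_tau=2.0, v_threshold=1.0, v_reset=0.0):
        super().__init__()
        # Parameterize 1/tau = sigmoid(w) so tau stays above 1 during training.
        init_w = -torch.log(torch.tensor(init_tau - 1.0))
        self.w = nn.Parameter(init_w)  # one learnable scalar per layer
        self.v_threshold = v_threshold
        self.v_reset = v_reset

    def forward(self, x_seq):
        # x_seq: [T, batch, ...] input current over T time-steps
        v = torch.full_like(x_seq[0], self.v_reset)
        spikes = []
        for x in x_seq:
            # Leaky integration with the learnable factor 1/tau
            v = v + torch.sigmoid(self.w) * (x - (v - self.v_reset))
            s = SurrogateHeaviside.apply(v - self.v_threshold)
            v = v * (1.0 - s) + self.v_reset * s  # hard reset where a spike fired
            spikes.append(s)
        return torch.stack(spikes)
```

Because 1/τ enters the update multiplicatively and is produced by a sigmoid, the gradient with respect to w flows through every time-step, which is what lets the time constant adapt alongside the weights.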

The efficacy of this approach is empirically evaluated on several datasets, including both traditional static datasets (MNIST, Fashion-MNIST, CIFAR-10) and neuromorphic datasets (N-MNIST, CIFAR10-DVS, DVS128 Gesture). Results demonstrate state-of-the-art performance, often outperforming existing methods while using fewer time-steps, which underscores the potential for significant improvements in both inference speed and power efficiency.

An additional contribution of the paper is the reevaluation of pooling methods within SNNs. The researchers find that max-pooling does not inherently lead to significant information loss as previously suggested. Instead, it offers advantages in computational efficiency and maintains compatibility with the binary nature of spike data. This insight reinforces the potential for optimizing pooling strategies to preserve neural network robustness while minimizing resource demands.
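As a small illustration of the binary-compatibility point (using a toy spike map, not data from the paper): max-pooling over {0, 1} activations returns values that are still valid spikes, whereas average pooling produces fractional values that a downstream spiking layer would have to handle separately.

```python
import torch
import torch.nn.functional as F

# A toy 1x1x4x4 map of binary spikes at one time-step.
spikes = torch.tensor([[[[0., 1., 0., 0.],
                         [1., 0., 0., 1.],
                         [0., 0., 1., 0.],
                         [1., 1., 0., 0.]]]])

max_out = F.max_pool2d(spikes, kernel_size=2)  # stays in {0, 1}
avg_out = F.avg_pool2d(spikes, kernel_size=2)  # fractional values, no longer spikes

print(max_out)  # binary output can feed the next spiking layer directly
print(avg_out)  # e.g. 0.25, 0.5 -- needs extra handling downstream
```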

The implications of incorporating learnable membrane time constants are significant. Theoretically, it yields a more biologically realistic model by allowing neurons within a network to adaptively change their response characteristics, in a manner akin to biological variability. Practically, this methodology could lead to more efficient SNNs that run in real time on energy-constrained devices, an ideal trait for deployment in edge computing scenarios or on neuromorphic hardware.

Future directions may explore the scalability of PLIF models in larger, more complex networks, particularly in the intersection of hybrid models that incorporate both spiking and non-spiking components. Furthermore, examining learnable combinations of other neuro-dynamical parameters could unearth additional insights and efficiencies.

Overall, this paper presents a comprehensive framework that not only challenges the conventional understanding of spiking network training but also opens new avenues for computationally efficient and biologically plausible models in artificial intelligence.

Authors (6)
  1. Wei Fang (98 papers)
  2. Zhaofei Yu (61 papers)
  3. Yanqi Chen (9 papers)
  4. Tiejun Huang (130 papers)
  5. Yonghong Tian (184 papers)
  6. Timothee Masquelier (5 papers)
Citations (450)