Backpropagation-free Spiking Neural Networks with the Forward-Forward Algorithm (2502.20411v2)
Abstract: Spiking Neural Networks (SNNs) offer a biologically inspired computational paradigm that emulates neuronal activity through discrete spike-based processing. Despite their advantages, training SNNs with traditional backpropagation (BP) remains challenging due to computational inefficiency and a lack of biological plausibility. This study explores the Forward-Forward (FF) algorithm as an alternative learning framework for SNNs. Unlike BP, which requires a forward pass followed by a backward pass, the FF algorithm employs two forward passes, enabling layer-wise localized learning, greater computational efficiency, and improved compatibility with neuromorphic hardware. We introduce an FF-based SNN training framework and evaluate its performance on both non-spiking (MNIST, Fashion-MNIST, Kuzushiji-MNIST) and spiking (Neuro-MNIST, SHD) datasets. Experimental results demonstrate that our model surpasses existing FF-based SNNs on the evaluated static datasets with a much lighter architecture, while achieving accuracy comparable to state-of-the-art BP-trained SNNs. On more complex spiking tasks such as SHD, our approach outperforms other SNN models and remains competitive with leading BP-trained SNNs. These findings highlight the FF algorithm's potential to advance SNN training methodologies by addressing key limitations of BP.
- Mohammadnavid Ghader
- Saeed Reza Kheradpisheh
- Bahar Farahani
- Mahmood Fazlali
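
The abstract describes the core mechanism only at a high level: two forward passes (one on positive data, one on negative data) with a purely layer-local objective, instead of a global backward pass. For orientation, below is a minimal non-spiking PyTorch sketch of Hinton's Forward-Forward objective that this line of work builds on. It is not the authors' SNN implementation: the `FFLayer` class, the `threshold` value, the softplus loss form, and the random toy batches are all illustrative assumptions, and the spiking dynamics of the actual paper are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    """One fully connected layer trained with a local Forward-Forward objective.

    Illustrative sketch of Hinton's FF algorithm (2022), not the paper's SNN model.
    """
    def __init__(self, in_dim, out_dim, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.threshold = threshold  # goodness threshold (assumed value)
        self.opt = torch.optim.Adam(self.linear.parameters(), lr=lr)

    def forward(self, x):
        # Normalize the input so only its direction (not the previous layer's
        # goodness, encoded in its magnitude) is passed forward.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Goodness = sum of squared activations; pushed above the threshold
        # for positive data and below it for negative data.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        loss = F.softplus(torch.cat([self.threshold - g_pos,
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()  # gradients stay local to this layer's weights
        self.opt.step()
        # Detach outputs so no gradient ever flows between layers.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

if __name__ == "__main__":
    # Toy demo with random stand-ins for positive (real, correctly labelled)
    # and negative (corrupted or mislabelled) MNIST-sized batches.
    x_pos = torch.rand(32, 784)
    x_neg = torch.rand(32, 784)
    layers = [FFLayer(784, 500), FFLayer(500, 500)]
    h_pos, h_neg = x_pos, x_neg
    for layer in layers:  # greedy, layer-local training: no global backward pass
        h_pos, h_neg = layer.train_step(h_pos, h_neg)
```

A stack of such layers is trained greedily: each layer's `train_step` consumes the detached outputs of the layer below, so learning remains strictly layer-local. This locality is the property the abstract credits for the method's computational efficiency and its compatibility with neuromorphic hardware.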