Estimating Post-Synaptic Effects for Online Training of Feed-Forward SNNs (2311.16151v1)
Abstract: Facilitating online learning in spiking neural networks (SNNs) is a key step in developing event-based models that can adapt to changing environments and learn from continuous data streams in real time. Although forward-mode differentiation enables online learning, its computational requirements restrict scalability. This is typically addressed through approximations that limit learning in deep models. In this study, we propose Online Training with Postsynaptic Estimates (OTPE) for training feed-forward SNNs. OTPE approximates Real-Time Recurrent Learning (RTRL) while incorporating temporal dynamics not captured by current approximations such as Online Training Through Time (OTTT) and Online Spatio-Temporal Learning (OSTL). We show improved scaling for multi-layer networks using a novel approximation of the temporal effects a layer's activity has on the subsequent layer. This approximation adds minimal time and space overhead relative to similar algorithms, and the calculation of temporal effects remains local to each layer. We characterize the learning performance of our proposed algorithms on multiple SNN model configurations under both rate-based and time-based encoding. Among the approximate methods, OTPE exhibits the highest directional alignment with exact gradients computed via backpropagation through time (BPTT) in deep networks, and it outperforms the other approximations on time-based encoding. We also observe sizeable gains in average performance over similar algorithms when training offline on the Spiking Heidelberg Digits dataset with equivalent hyperparameters (OTTT/OSTL: 70.5%; OTPE: 75.2%; BPTT: 78.1%).
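To ground the comparison, the sketch below shows, in JAX (the software stack suggested by the paper's JAX/Flax references), the layer-local, trace-based online learning pattern that methods like OTTT share and that OTPE builds on: each layer carries only its membrane state and a leaky presynaptic eligibility trace, and combines the trace with a per-timestep learning signal instead of unrolling through time. All names, constants, the boxcar surrogate, and the placeholder learning signal are illustrative assumptions, not the authors' OTPE implementation.

```python
import jax
import jax.numpy as jnp

LEAK = 0.9        # membrane leak factor (illustrative value)
THRESHOLD = 1.0   # firing threshold (illustrative value)

def surrogate_grad(u):
    # Boxcar surrogate for the non-differentiable spike function (one common choice).
    return (jnp.abs(u - THRESHOLD) < 0.5).astype(jnp.float32)

def online_step(carry, s_in, w, learning_signal):
    """One timestep of trace-based online learning for a single LIF layer.

    Only the membrane potential and a leaky presynaptic trace are carried
    forward, so no history of past states is stored (unlike BPTT).
    """
    u, trace = carry
    u = LEAK * u + s_in @ w                       # leaky integration of input current
    s_out = (u >= THRESHOLD).astype(jnp.float32)  # spike generation
    u = u - s_out * THRESHOLD                     # soft reset after spiking
    trace = LEAK * trace + s_in                   # eligibility trace of presynaptic spikes
    # Instantaneous gradient estimate: outer product of the presynaptic trace
    # with the learning signal, gated by the surrogate spike derivative.
    grad_w = jnp.outer(trace, learning_signal * surrogate_grad(u))
    return (u, trace), (s_out, grad_w)

# Toy usage: 16 inputs feeding 8 LIF neurons, with a weight update at every step.
key = jax.random.PRNGKey(0)
w = 0.3 * jax.random.normal(key, (16, 8))
u, trace = jnp.zeros(8), jnp.zeros(16)
for t in range(10):
    s_in = (jax.random.uniform(jax.random.fold_in(key, t), (16,)) < 0.2).astype(jnp.float32)
    learning_signal = 0.01 * jnp.ones(8)          # placeholder top-down error signal
    (u, trace), (_, grad_w) = online_step((u, trace), s_in, w, learning_signal)
    w = w - grad_w                                # immediate online update, no unrolling
```

Per the abstract, what distinguishes OTPE from this shared pattern is an additional, still layer-local estimate of how a layer's past activity shapes the subsequent layer's activity; that postsynaptic-estimate machinery is deliberately omitted from the sketch above.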
- F. C. Bauer, G. Lenz, S. Haghighatshoar, and S. Sheik. EXODUS: Stable and efficient training of spiking neural networks. Frontiers in Neuroscience, 17:1110444, 2023.
- F. Benzing, M. M. Gauy, A. Mujika, A. Martinsson, and A. Steger. Optimal Kronecker-sum approximation of real time recurrent learning. In International Conference on Machine Learning, pages 604–613. PMLR, 2019.
- T. Bohnstingl, S. Woźniak, A. Pantazi, and E. Eleftheriou. Online spatio-temporal learning in deep neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2022.
- J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.
- B. Cramer, Y. Stradmann, J. Schemmel, and F. Zenke. The Heidelberg spiking data sets for the systematic evaluation of spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems, 33(7):2744–2757, 2020.
- J. K. Eshraghian, M. Ward, E. O. Neftci, X. Wang, G. Lenz, G. Dwivedi, M. Bennamoun, D. S. Jeong, and W. D. Lu. Training spiking neural networks using lessons from deep learning. Proceedings of the IEEE, 2023.
- W. Fang, Z. Yu, Y. Chen, T. Masquelier, T. Huang, and Y. Tian. Incorporating learnable membrane time constant to enhance learning of spiking neural networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2661–2671, 2021.
- W. Gerstner, M. Lehmann, V. Liakoni, D. Corneil, and J. Brea. Eligibility traces and plasticity on behavioral time scales: Experimental support of neoHebbian three-factor learning rules. Frontiers in Neural Circuits, 12:53, 2018.
- J. Heek, A. Levskaya, A. Oliver, M. Ritter, B. Rondepierre, A. Steiner, and M. van Zee. Flax: A neural network library and ecosystem for JAX, 2023. URL http://github.com/google/flax.
- J. Kaiser, H. Mostafa, and E. Neftci. Synaptic plasticity dynamics for deep continuous local learning (DECOLLE). Frontiers in Neuroscience, 14:424, 2020.
- D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein. Visualizing the loss landscape of neural nets. Advances in Neural Information Processing Systems, 31, 2018.
- T. P. Lillicrap, D. Cownden, D. B. Tweed, and C. J. Akerman. Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications, 7(1):13276, 2016.
- Y. Maeda and M. Wakamura. Simultaneous perturbation learning rule for recurrent neural networks and its FPGA implementation. IEEE Transactions on Neural Networks, 16(6):1664–1672, 2005.
- J. Menick, E. Elsen, U. Evci, S. Osindero, K. Simonyan, and A. Graves. A practical sparse approximation for real time recurrent learning. arXiv preprint arXiv:2006.07232, 2020.
- A. Mujika, F. Meier, and A. Steger. Approximating real-time recurrent learning with random Kronecker factors. Advances in Neural Information Processing Systems, 31, 2018.
- E. O. Neftci, H. Mostafa, and F. Zenke. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Processing Magazine, 36(6):51–63, 2019.
- S. B. Shrestha and G. Orchard. SLAYER: Spike layer error reassignment in time. Advances in Neural Information Processing Systems, 31, 2018.
- C. Tallec and Y. Ollivier. Unbiased online recurrent optimization. In International Conference on Learning Representations, 2018.
- R. J. Williams and D. Zipser. Experimental analysis of the real-time recurrent learning algorithm. Connection Science, 1(1):87–111, 1989.
- M. Xiao, Q. Meng, Z. Zhang, D. He, and Z. Lin. Online training through time for spiking neural networks. arXiv preprint arXiv:2210.04195, 2022.
- J. Yik et al. NeuroBench: Advancing neuromorphic computing through collaborative, fair and representative benchmarking. arXiv preprint arXiv:2304.04640, 2023.
- F. Zenke and S. Ganguli. SuperSpike: Supervised learning in multilayer spiking neural networks. Neural Computation, 30(6):1514–1541, 2018.
- F. Zenke and T. P. Vogels. The remarkable robustness of surrogate gradient learning for instilling complex function in spiking neural networks. Neural Computation, 33(4):899–925, 2021.
Authors: Thomas Summe, Clemens JS Schaefer, Siddharth Joshi