Event-Driven Learning for Spiking Neural Networks (2403.00270v1)

Published 1 Mar 2024 in cs.NE and cs.CV

Abstract: Brain-inspired spiking neural networks (SNNs) have gained prominence in the field of neuromorphic computing owing to their low energy consumption during feedforward inference on neuromorphic hardware. However, how to effectively exploit the sparse, event-driven property of SNNs to minimize backpropagation learning costs remains an open challenge. In this paper, we conduct a comprehensive examination of existing event-driven learning algorithms, reveal their limitations, and propose novel solutions to overcome them. Specifically, we introduce two novel event-driven learning methods: the spike-timing-dependent event-driven (STD-ED) and membrane-potential-dependent event-driven (MPD-ED) algorithms. These algorithms leverage precise neuronal spike timing and membrane potential, respectively, for effective learning. Both methods are extensively evaluated on static and neuromorphic datasets and confirm superior performance, outperforming existing event-driven counterparts by up to 2.51% for STD-ED and 6.79% for MPD-ED on the CIFAR-100 dataset. In addition, we theoretically and experimentally validate the energy efficiency of our methods on neuromorphic hardware: on-chip learning experiments achieve a remarkable 30-fold reduction in energy consumption over time-step-based surrogate gradient methods. The demonstrated efficiency and efficacy of the proposed event-driven learning methods underscore their potential to significantly advance the field of neuromorphic computing, offering promising avenues for energy-efficient applications.
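The abstract contrasts event-driven learning with time-step-based surrogate gradients but does not spell out the mechanics. Below is a minimal, hypothetical NumPy sketch of the general idea: a leaky integrate-and-fire (LIF) neuron is simulated step by step, but credit assignment touches the weights only at spike events rather than at every time step. All constants, the eligibility window, and the update rule here are illustrative assumptions for exposition; they are not the paper's STD-ED or MPD-ED derivations.

```python
import numpy as np

# Illustrative constants (assumptions, not taken from the paper)
TAU = 20.0   # membrane time constant (ms)
V_TH = 1.0   # firing threshold
DT = 1.0     # simulation step (ms)
LR = 0.01    # learning rate


def simulate_lif(weights, input_spikes):
    """Simulate one LIF neuron; return the membrane trace and output spike times.

    input_spikes: (T, N) binary array of presynaptic spikes.
    """
    T, N = input_spikes.shape
    v = 0.0
    v_trace, out_spikes = [], []
    decay = np.exp(-DT / TAU)
    for t in range(T):
        v = v * decay + input_spikes[t] @ weights  # leaky integration
        v_trace.append(v)
        if v >= V_TH:            # spike event
            out_spikes.append(t)
            v = 0.0              # hard reset
    return np.array(v_trace), out_spikes


def event_driven_update(weights, input_spikes, out_spikes, error):
    """Accumulate weight updates only at output spike events.

    The simulation loop above still costs O(T), but learning touches the
    weights only at the few time steps in `out_spikes`, instead of at every
    one of the T steps as a time-step-based surrogate-gradient method would.
    """
    grad = np.zeros_like(weights)
    for t in out_spikes:
        # Eligibility: recent presynaptic activity in a short window -- a
        # crude stand-in for the exact spike-timing (STD-ED) or membrane-
        # potential (MPD-ED) terms the paper derives.
        window = input_spikes[max(0, t - 5): t + 1]
        grad += error * window.sum(axis=0)
    return weights - LR * grad


# Toy usage: 100 time steps, 8 presynaptic inputs
rng = np.random.default_rng(0)
w = rng.normal(0.2, 0.05, size=8)
spikes_in = (rng.random((100, 8)) < 0.1).astype(float)
v_trace, spikes_out = simulate_lif(w, spikes_in)
w = event_driven_update(w, spikes_in, spikes_out, error=0.5)
```

Because the update loop runs over spike events rather than all T time steps, its cost scales with firing sparsity; this sparsity is the general source of the energy savings that event-driven learning targets on neuromorphic hardware.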
