
One-Spike SNN: Single-Spike Phase Coding with Base Manipulation for ANN-to-SNN Conversion Loss Minimization (2403.08786v1)

Published 30 Jan 2024 in cs.NE and cs.AI

Abstract: Because spiking neural networks (SNNs) are event-driven, their energy efficiency is higher than that of conventional artificial neural networks (ANNs). Since SNNs deliver data through discrete spikes, gradient-based training is difficult, which limits their accuracy. To keep the accuracy of SNNs close to that of their ANN counterparts, pre-trained ANNs are converted to SNNs (ANN-to-SNN conversion). During this conversion, encoding the activations of the ANN into sets of spikes in the SNN is crucial for minimizing the conversion loss. In this work, we propose single-spike phase coding, an encoding scheme that minimizes the number of spikes needed to transfer data between SNN layers. To reduce the encoding error caused by the single-spike approximation in phase coding, we introduce a threshold shift and base manipulation. Without any additional retraining or architectural constraints on the ANNs, the proposed conversion method incurs only a small loss in inference accuracy (0.58% on average), as verified on three convolutional neural networks (CNNs) with the CIFAR and ImageNet datasets. In addition, graph convolutional networks (GCNs) are successfully converted to SNNs with an average accuracy loss of 0.90%. Most importantly, the energy efficiency of our SNN improves by 4.6-17.3x compared to the ANN baseline.
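To make the encoding idea concrete, below is a minimal, illustrative sketch of single-spike phase coding in Python. The function name `single_spike_phase_encode`, the phase-weight convention `base**-(k+1)`, and the normalization of activations to [0, 1] are assumptions made for illustration, not the paper's exact formulation, and the threshold-shift technique is not modeled. The small experiment at the end only compares how changing the base alters the quantization error of the single-spike approximation.

```python
import numpy as np

def single_spike_phase_encode(activation, num_phases=8, base=2.0):
    """Approximate a normalized activation in [0, 1] by a single spike.

    A spike fired at phase k is assumed to carry weight base**-(k + 1);
    the phase whose weight is closest to the activation is chosen.
    (Illustrative sketch only; indexing and normalization are assumptions.)
    """
    # Weight carried by a single spike at each of the num_phases phases.
    weights = base ** -(np.arange(1, num_phases + 1))
    if activation <= 0:
        return None, 0.0  # no spike is fired for non-positive activations
    k = int(np.argmin(np.abs(weights - activation)))  # best single-phase fit
    return k, weights[k]

# Compare how the choice of base affects the mean quantization error
# of the single-spike approximation over uniformly sampled activations.
rng = np.random.default_rng(0)
acts = rng.uniform(0.0, 1.0, size=10_000)
for base in (2.0, 1.5, 1.2):
    err = np.mean([abs(a - single_spike_phase_encode(a, base=base)[1]) for a in acts])
    print(f"base={base:.1f}  mean |error| = {err:.4f}")
```

In the actual conversion, each neuron would fire at most one such weighted spike per encoding window, and the proposed threshold shift would further compensate for the residual approximation error; those details are omitted from this sketch.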

Authors (2)
  1. Sangwoo Hwang (1 paper)
  2. Jaeha Kung (7 papers)
Citations (2)

