Ternary Spike: Learning Ternary Spikes for Spiking Neural Networks (2312.06372v2)

Published 11 Dec 2023 in cs.CV

Abstract: The Spiking Neural Network (SNN), a biologically inspired neural network architecture, has drawn increasing attention recently. It transmits information with binary spike activations, so multiplications of activations and weights can be replaced by additions, bringing high energy efficiency. However, in this paper we prove theoretically and experimentally that the binary spike activation map cannot carry enough information, causing information loss and degrading accuracy. To address this problem, we propose a ternary spike neuron to transmit information. The ternary spike neuron retains the event-driven and multiplication-free advantages of the binary spike neuron while increasing information capacity. Furthermore, we embed a trainable factor in the ternary spike neuron to learn a suitable spike amplitude, so that our SNN adopts different spike amplitudes across layers, better matching the fact that membrane potential distributions differ from layer to layer. To retain the efficiency of the vanilla ternary spike, the trainable ternary spike SNN is converted back to a standard one via a re-parameterization technique at inference time. Extensive experiments with several popular network structures on static and dynamic datasets show that the ternary spike consistently outperforms state-of-the-art methods. Our code is open-sourced at https://github.com/yfguo91/Ternary-Spike.
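
The mechanism described in the abstract is straightforward to express in code: a neuron fires in {-1, 0, +1} against a symmetric threshold, a learnable per-layer factor alpha scales the spike to {-alpha, 0, +alpha} during training, and alpha is later folded into the adjacent linear layer's weights so inference again uses plain ternary spikes. Below is a minimal PyTorch sketch of this idea, not the authors' released implementation: the rectangular surrogate gradient, the soft reset, and the `vth`/`tau` defaults are assumptions, and `next_conv` in the folding step is a hypothetical following layer.

```python
import torch
import torch.nn as nn


class TernarySpike(torch.autograd.Function):
    """Fire +1 if u >= vth, -1 if u <= -vth, else 0 (vth > 0)."""

    @staticmethod
    def forward(ctx, u, vth):
        ctx.save_for_backward(u)
        ctx.vth = vth
        return (u >= vth).float() - (u <= -vth).float()

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        # Rectangular surrogate gradient: pass gradients only near the two
        # firing thresholds (an assumption; other surrogates would also work).
        window = ((u - ctx.vth).abs() < 0.5) | ((u + ctx.vth).abs() < 0.5)
        return grad_out * window.float(), None


class TrainableTernaryLIF(nn.Module):
    """LIF neuron emitting {-alpha, 0, +alpha} with a learnable alpha."""

    def __init__(self, vth=1.0, tau=0.25):
        super().__init__()
        self.vth, self.tau = vth, tau
        self.alpha = nn.Parameter(torch.tensor(1.0))  # per-layer amplitude

    def forward(self, x, u):
        u = self.tau * u + x                     # leaky integration
        s = TernarySpike.apply(u, self.vth)      # ternary spike in {-1, 0, 1}
        u = u - s * self.vth                     # soft reset by threshold
        return self.alpha * s, u                 # scaled spike, new state


if __name__ == "__main__":
    neuron = TrainableTernaryLIF()
    next_conv = nn.Linear(16, 8, bias=False)     # hypothetical next layer
    x = torch.randn(5, 4, 16)                    # (timesteps, batch, features)
    u = torch.zeros(4, 16)                       # membrane state
    for t in range(5):
        s, u = neuron(x[t], u)
        y = next_conv(s)                         # consumes scaled spikes

    # Re-parameterization after training: since the next layer is linear,
    # next_conv(alpha * s) == alpha * next_conv(s), so folding alpha into
    # its weights restores pure {-1, 0, +1} spikes (addition-only
    # arithmetic) at inference.
    with torch.no_grad():
        next_conv.weight.mul_(neuron.alpha)
        neuron.alpha.fill_(1.0)
```

Reading the abstract this way, the re-parameterization is free at inference: the scaling is absorbed once into the weights, so the deployed network keeps the event-driven, multiplication-free property of a standard ternary-spike SNN.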
