Spikformer V2: Join the High Accuracy Club on ImageNet with an SNN Ticket (2401.02020v1)

Published 4 Jan 2024 in cs.NE, cs.CV, and cs.LG

Abstract: Spiking Neural Networks (SNNs), known for their biologically plausible architecture, face the challenge of limited performance. The self-attention mechanism, the cornerstone of the high-performance Transformer and itself a biologically inspired structure, is absent from existing SNNs. To this end, we explore the potential of leveraging both the self-attention capability and the biological properties of SNNs, and propose a novel Spiking Self-Attention (SSA) and Spiking Transformer (Spikformer). The SSA mechanism eliminates the need for softmax and captures sparse visual features using spike-based Query, Key, and Value. This sparse, multiplication-free computation makes SSA efficient and energy-saving. Further, we develop a Spiking Convolutional Stem (SCS) with supplementary convolutional layers to enhance the Spikformer architecture. The Spikformer enhanced with the SCS is referred to as Spikformer V2. To train larger and deeper Spikformer V2 models, we present a pioneering exploration of Self-Supervised Learning (SSL) within SNNs. Specifically, we pre-train Spikformer V2 with a masking-and-reconstruction scheme inspired by mainstream self-supervised Transformers, and then finetune it on image classification on ImageNet. Extensive experiments show that Spikformer V2 outperforms previous surrogate-gradient training and ANN2SNN methods. An 8-layer Spikformer V2 achieves an accuracy of 80.38% using 4 time steps, and after SSL, a 172M-parameter, 16-layer Spikformer V2 reaches an accuracy of 81.10% with just 1 time step. To the best of our knowledge, this is the first time an SNN has achieved 80+% accuracy on ImageNet. The code will be available at Spikformer V2.
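
As a rough illustration of the SSA idea summarized in the abstract, the sketch below shows spike-form Query, Key, and Value multiplied directly and rescaled by a constant factor, with no softmax. This is a minimal sketch under stated assumptions, not the authors' released implementation: the names `SpikingSelfAttentionSketch`, `heaviside_spike`, and the `scale` value are illustrative placeholders, and a real Spikformer layer would use surrogate-gradient LIF neurons, batch normalization, multiple heads, and a time-step dimension.

```python
import torch
import torch.nn as nn

def heaviside_spike(x: torch.Tensor, threshold: float = 1.0) -> torch.Tensor:
    """Toy spike function: emit a binary spike where the input crosses the threshold.
    (A surrogate-gradient LIF neuron would be used in practice; this is a placeholder.)"""
    return (x >= threshold).float()

class SpikingSelfAttentionSketch(nn.Module):
    """Minimal sketch of Spiking Self-Attention (SSA): spike-form Q, K, V,
    no softmax, and a constant rescaling of the attention output."""
    def __init__(self, dim: int, scale: float = 0.125):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.scale = scale  # fixed scaling factor in place of softmax normalization

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) input for a single time step
        q = heaviside_spike(self.q_proj(x))  # binary spike tensors in {0, 1}
        k = heaviside_spike(self.k_proj(x))
        v = heaviside_spike(self.v_proj(x))
        # Because q, k, v are binary, these matmuls reduce to masked additions
        attn = q @ k.transpose(-2, -1)       # (batch, tokens, tokens), non-negative counts
        out = (attn @ v) * self.scale        # rescale instead of applying softmax
        return heaviside_spike(out)          # spike output passed to the next layer

# Usage sketch
x = torch.randn(2, 16, 64)                  # (batch, tokens, dim)
ssa = SpikingSelfAttentionSketch(dim=64)
print(ssa(x).shape)                         # torch.Size([2, 16, 64])
```

The design point the sketch tries to capture is that once Q, K, and V are binary spike tensors, the attention products contain no floating-point multiplications between activations, which is what makes SSA energy-efficient relative to softmax attention.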

Authors (8)
  1. Zhaokun Zhou (22 papers)
  2. Kaiwei Che (8 papers)
  3. Wei Fang (98 papers)
  4. Keyu Tian (6 papers)
  5. Yuesheng Zhu (30 papers)
  6. Shuicheng Yan (275 papers)
  7. Yonghong Tian (184 papers)
  8. Li Yuan (141 papers)
Citations (18)