Spiker+: a framework for the generation of efficient Spiking Neural Networks FPGA accelerators for inference at the edge (2401.01141v1)

Published 2 Jan 2024 in cs.NE, cs.AI, and cs.AR

Abstract: Embedding Artificial Neural Networks in systems at the edge allows applications to exploit Artificial Intelligence capabilities directly within devices operating at the network periphery. This paper introduces Spiker+, a comprehensive framework for generating efficient, low-power, and low-area customized Spiking Neural Network (SNN) accelerators on FPGA for inference at the edge. Spiker+ provides a configurable multi-layer hardware SNN, a library of highly efficient neuron architectures, and a design framework that enables the development of complex neural network accelerators with a few lines of Python code. Spiker+ is tested on two benchmark datasets, MNIST and the Spiking Heidelberg Digits (SHD). On MNIST, it demonstrates performance competitive with state-of-the-art SNN accelerators, and it outperforms them in resource allocation, requiring 7,612 logic cells and 18 Block RAMs (BRAMs), small enough to fit on very small FPGAs, and in power consumption, drawing only 180 mW for a complete inference on an input image. Latency is comparable to the state of the art, at 780 µs per image. To the authors' knowledge, Spiker+ is the first SNN accelerator tested on the SHD. In this case, the accelerator requires 18,268 logic cells and 51 BRAMs, with an overall power consumption of 430 mW and a latency of 54 µs for a complete inference on input data. This underscores the significance of Spiker+ in the hardware-accelerated SNN landscape, making it an excellent solution for deploying configurable and tunable SNN architectures in resource- and power-constrained edge applications.
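The abstract mentions a library of hardware-efficient neuron architectures driven from Python. As a rough illustration of the kind of neuron dynamics such accelerators typically implement, the sketch below shows a fixed-point leaky integrate-and-fire (LIF) update with a shift-based leak, a common FPGA-friendly substitute for exponential decay. All names, signatures, and parameter values here are assumptions made for illustration; this is not the actual Spiker+ API or neuron implementation.

```python
# Minimal sketch of hardware-style LIF dynamics (illustrative only; the
# function name, signature, and constants are assumptions, not Spiker+ code).
import numpy as np

def lif_step(v, in_spikes, weights, threshold=64, shift=4):
    """One discrete time step for a layer of integer LIF neurons.

    v          : membrane potentials, int32 array of shape (n_neurons,)
    in_spikes  : binary input spikes, int32 array of shape (n_inputs,)
    weights    : synaptic weights, int32 array of shape (n_neurons, n_inputs)
    threshold  : firing threshold
    shift      : leak as a right shift (v -= v >> shift), an FPGA-friendly
                 approximation of exponential membrane decay
    """
    v = v - (v >> shift)               # leak: a shift instead of a multiply
    v = v + weights @ in_spikes        # integrate weighted input spikes
    out_spikes = v >= threshold        # fire where the threshold is crossed
    v = np.where(out_spikes, 0, v)     # reset neurons that fired
    return v, out_spikes.astype(np.int32)

# Example: 4 neurons, 3 inputs, one time step.
rng = np.random.default_rng(0)
v = np.zeros(4, dtype=np.int32)
w = rng.integers(0, 32, size=(4, 3), dtype=np.int32)
v, s = lif_step(v, np.array([1, 0, 1], dtype=np.int32), w)
```

The shift-based leak and integer arithmetic are the sort of simplifications that keep logic-cell and BRAM usage low on small FPGAs, consistent with the resource figures reported in the abstract.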
