SpikeExplorer: hardware-oriented Design Space Exploration for Spiking Neural Networks on FPGA (2404.03714v1)

Published 4 Apr 2024 in cs.NE and cs.AI

Abstract: One of today's main concerns is to bring Artificial Intelligence power to embedded systems for edge applications. The hardware resources and power consumption required by state-of-the-art models are incompatible with the constrained environments observed in edge systems, such as IoT nodes and wearable devices. Spiking Neural Networks (SNNs) can represent a solution in this sense: inspired by neuroscience, they reach unparalleled power and resource efficiency when run on dedicated hardware accelerators. However, when designing such accelerators, the number of possible design choices is huge. This paper presents SpikExplorer, a modular and flexible Python tool for hardware-oriented Automatic Design Space Exploration that automates the configuration of FPGA accelerators for SNNs. Using Bayesian optimization, SpikExplorer enables hardware-centric multi-objective optimization, supporting factors such as accuracy, area, latency, power, and various combinations thereof during the exploration process. The tool searches for the optimal network architecture, neuron model, and internal and training parameters, trying to meet the constraints imposed by the user. It allows for straightforward network configuration, providing the full set of explored points so the user can pick the trade-off that best fits their needs. The potential of SpikExplorer is showcased using three benchmark datasets. It reaches 95.8% accuracy on the MNIST dataset, with a power consumption of 180 mW/image and a latency of 0.12 ms/image, making it a powerful tool for automatically optimizing SNNs.
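The exploration loop the abstract describes — sample accelerator configurations, evaluate hardware and accuracy metrics, and hand the user the full Pareto front of trade-offs — can be sketched as follows. This is an illustrative assumption, not SpikExplorer's actual code: the search space, the stand-in cost model, and the use of random sampling (in place of the tool's Bayesian optimization) are all hypothetical.

```python
import random

# Hypothetical search space for an SNN FPGA accelerator (illustrative only;
# SpikExplorer's real parameters include neuron model, architecture, and
# training hyperparameters).
SEARCH_SPACE = {
    "neurons_per_layer": [64, 128, 256],
    "weight_bits": [4, 6, 8],
    "neuron_model": ["LIF", "IF"],
}

def evaluate(cfg, rng):
    """Stand-in for training the SNN and estimating FPGA cost.

    Returns (accuracy, power_mW). A real flow would train/simulate the
    network and query synthesis or analytical hardware models.
    """
    size = cfg["neurons_per_layer"] * cfg["weight_bits"]
    accuracy = min(0.99, 0.80 + size / 12000 + rng.uniform(0.0, 0.02))
    power_mw = 50.0 + 0.08 * size + (20.0 if cfg["neuron_model"] == "LIF" else 0.0)
    return accuracy, power_mw

def pareto_front(points):
    """Keep points not strictly dominated (higher accuracy AND lower power wins)."""
    front = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] <= p[1] and (q[0] > p[0] or q[1] < p[1])
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

rng = random.Random(0)
explored = []
for _ in range(50):
    cfg = {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}
    acc, power = evaluate(cfg, rng)
    explored.append((acc, power, cfg))

# As in the tool, the user receives every explored point and picks a trade-off
# from the non-dominated set.
front = pareto_front(explored)
for acc, power, cfg in sorted(front):
    print(f"accuracy={acc:.3f}  power={power:.0f} mW  cfg={cfg}")
```

In the actual tool, the random sampler above is replaced by a Bayesian optimizer that proposes the next configuration based on the metrics observed so far, which is what makes exploring a huge design space tractable.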
