SpiKernel: A Kernel Size Exploration Methodology for Improving Accuracy of the Embedded Spiking Neural Network Systems (2404.01685v3)
Abstract: Spiking Neural Networks (SNNs) can offer ultra-low power/energy consumption for machine learning tasks due to their sparse, spike-based operations. However, most current SNN architectures require significantly larger model sizes to achieve higher accuracy, which makes them unsuitable for resource-constrained embedded applications. Developing SNNs that achieve high accuracy with an acceptable memory footprint is therefore highly desirable. Toward this, we propose SpiKernel, a novel methodology that improves the accuracy of SNNs through kernel size exploration. Its key steps are: (1) investigating the impact of different kernel sizes on accuracy, (2) devising new sets of kernel sizes, (3) generating SNN architectures via neural architecture search (NAS) based on the selected kernel sizes, and (4) analyzing accuracy-memory trade-offs for SNN model selection. Experimental results show that SpiKernel achieves higher accuracy than state-of-the-art works (i.e., 93.24% on CIFAR10, 70.84% on CIFAR100, and 62% on TinyImageNet) with fewer than 10M parameters and up to a 4.8x search-time speed-up, making it suitable for embedded applications.
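To make the four-step flow concrete, below is a minimal, hypothetical Python sketch of a kernel-size exploration loop under a memory budget. It is not the authors' implementation: the candidate kernel sets, the channel configuration, and the `proxy_accuracy` scoring function are illustrative placeholders standing in for the paper's NAS-based accuracy estimation.

```python
# Hypothetical sketch of SpiKernel-style kernel-size exploration
# (steps 2-4 of the abstract). All names and numbers are illustrative.

from itertools import product

def conv_params(in_ch, out_ch, k):
    """Parameter count of a k x k convolution (weights only, no bias)."""
    return in_ch * out_ch * k * k

def network_params(kernel_sizes, channels=(3, 64, 128, 256)):
    """Total parameters of a simple conv stack, one kernel size per layer."""
    layer_shapes = zip(channels, channels[1:])
    return sum(conv_params(cin, cout, k)
               for (cin, cout), k in zip(layer_shapes, kernel_sizes))

def proxy_accuracy(kernel_sizes):
    # Placeholder for the NAS accuracy estimate (e.g., short training runs);
    # here we simply reward moderate kernel sizes for illustration.
    return -sum((k - 5) ** 2 for k in kernel_sizes)

# Step 2: devise candidate kernel-size sets for a 3-layer backbone.
candidates = list(product([1, 3, 5, 7], repeat=3))

# Step 4: keep only architectures under the memory budget, then rank them.
BUDGET = 10_000_000  # ~10M parameters, matching the paper's constraint
feasible = [(ks, network_params(ks)) for ks in candidates
            if network_params(ks) <= BUDGET]
best_ks, best_size = max(feasible, key=lambda item: proxy_accuracy(item[0]))
print(f"selected kernel sizes {best_ks} with {best_size:,} parameters")
```

In the paper, the accuracy estimate would come from the NAS procedure rather than a closed-form proxy, and the memory model would cover the full SNN (weights plus neuron state), but the budget-then-rank structure of the trade-off analysis is the same.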
- Rachmad Vidya Wicaksana Putra
- Muhammad Shafique