A Fully-Configurable Open-Source Software-Defined Digital Quantized Spiking Neural Core Architecture (2404.02248v1)
Abstract: We introduce QUANTISENC, a fully configurable open-source software-defined digital quantized spiking neural core architecture to advance research in neuromorphic computing. QUANTISENC is designed hierarchically using a bottom-up methodology with multiple neurons in each layer and multiple layers in each core. The number of layers and neurons per layer can be configured via software in a top-down methodology to generate the hardware for a target spiking neural network (SNN) model. QUANTISENC uses leaky integrate-and-fire (LIF) neurons and current-based (CUBA) excitatory and inhibitory synapses. The nonlinear dynamics of a neuron can be configured at run time by programming its internal control registers. Each neuron performs signed fixed-point arithmetic with user-defined quantization and decimal precision. QUANTISENC supports all-to-all, one-to-one, and Gaussian connections between layers. Its hardware-software interface is integrated with a PyTorch-based SNN simulator. This integration allows users to define and train an SNN model in PyTorch and evaluate the hardware performance (e.g., area, power, latency, and throughput) through FPGA prototyping and ASIC design. The hardware-software interface also takes advantage of the layer-based architecture and distributed memory organization of QUANTISENC to enable pipelining by overlapping computations on streaming data. Overall, the proposed software-defined hardware design methodology offers flexibility similar to that of high-level synthesis (HLS), but provides better hardware performance with zero hardware development effort. We evaluate QUANTISENC using three spiking datasets and demonstrate its superior performance over state-of-the-art designs.
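The neuron model the abstract describes can be illustrated in software. The following is a minimal sketch, not QUANTISENC's actual RTL or hardware-software interface: a discrete-time LIF neuron driven by a current-based (CUBA) input, with every membrane update rounded to a signed fixed-point grid whose total width and fractional precision are user-defined, mirroring the configurable quantization the core performs. All function names, parameter names, and numeric settings here are illustrative assumptions.

```python
def quantize(x, total_bits=16, frac_bits=8):
    """Round x onto a signed fixed-point grid with the given total width
    and fractional precision, saturating at the representable range."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))          # most negative raw code
    hi = (1 << (total_bits - 1)) - 1       # most positive raw code
    q = max(lo, min(hi, round(x * scale)))
    return q / scale

def lif_step(v, input_current, leak=0.9, v_thresh=1.0, v_reset=0.0,
             total_bits=16, frac_bits=8):
    """One LIF update: leaky decay plus CUBA input current, quantized,
    then compared against the firing threshold."""
    v = quantize(leak * v + input_current, total_bits, frac_bits)
    if v >= v_thresh:
        return v_reset, 1  # spike emitted, membrane potential reset
    return v, 0

# Drive the neuron with a constant excitatory current and count spikes.
v, spikes = 0.0, 0
for _ in range(20):
    v, s = lif_step(v, 0.3)
    spikes += s
# With these illustrative settings the neuron fires periodically.
```

In the hardware, the `leak`, `v_thresh`, and `v_reset` values would correspond to the run-time-programmable internal control registers, and `total_bits`/`frac_bits` to the user-defined quantization and decimal precision of the signed fixed-point datapath.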