Spyx: A Library for Just-In-Time Compiled Optimization of Spiking Neural Networks (2402.18994v1)

Published 29 Feb 2024 in cs.NE and cs.LG

Abstract: As the role of artificial intelligence becomes increasingly pivotal in modern society, the efficient training and deployment of deep neural networks have emerged as critical areas of focus. Recent advancements in attention-based large neural architectures have spurred the development of AI accelerators, facilitating the training of extensive, multi-billion-parameter models. Despite their effectiveness, these powerful networks often incur high execution costs in production environments. Neuromorphic computing, inspired by biological neural processes, offers a promising alternative. By utilizing temporally sparse computations, Spiking Neural Networks (SNNs) promise to enhance energy efficiency through a reduced, low-power hardware footprint. However, training SNNs can be challenging because their recurrent nature cannot as easily leverage the massive parallelism of modern AI accelerators. To facilitate the investigation of SNN architectures and dynamics, researchers have sought to bridge Python-based deep learning frameworks such as PyTorch or TensorFlow with custom-implemented compute kernels. This paper introduces Spyx, a new and lightweight SNN simulation and optimization library designed in JAX. By pre-staging data in the expansive vRAM of contemporary accelerators and employing extensive JIT compilation, Spyx allows SNN optimization to be executed as a unified, low-level program on NVIDIA GPUs or Google TPUs. This approach achieves optimal hardware utilization, surpassing the performance of many existing SNN training frameworks while maintaining considerable flexibility.
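
The core idea in the abstract, expressing the spiking time loop as one program that JIT compilation can fuse into a single low-level kernel for a GPU or TPU, can be illustrated with a minimal JAX sketch. This is not Spyx's actual API: the lif_layer function, the leaky integrate-and-fire dynamics, and all shapes below are illustrative assumptions showing how such a loop is typically written with jax.lax.scan and compiled with jax.jit.

import jax
import jax.numpy as jnp

def lif_layer(weights, spikes_in, beta=0.9, threshold=1.0):
    # Run a leaky integrate-and-fire layer over a [T, batch, in_dim] spike train.
    def step(v, x_t):
        v = beta * v + x_t @ weights               # leaky membrane integration
        s = (v > threshold).astype(jnp.float32)    # hard threshold emits spikes
        v = v - s * threshold                      # soft reset after a spike
        return v, s

    v0 = jnp.zeros((spikes_in.shape[1], weights.shape[1]))
    _, spikes_out = jax.lax.scan(step, v0, spikes_in)
    return spikes_out

# jax.jit traces the whole time loop into a single fused XLA program,
# the kind of unified, accelerator-resident execution the abstract describes.
lif_jit = jax.jit(lif_layer)

key = jax.random.PRNGKey(0)
W = 0.1 * jax.random.normal(key, (64, 128))
x = (jax.random.uniform(key, (100, 32, 64)) < 0.1).astype(jnp.float32)
out = lif_jit(W, x)  # output spike train of shape [100, 32, 128]

For training rather than inference, the hard threshold would be replaced by a surrogate-gradient spike function to make the loop differentiable; this sketch covers only the forward pass.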

Authors (2)
  1. Kade M. Heckel (3 papers)
  2. Thomas Nowotny (11 papers)
Citations (2)
