
Slax: A Composable JAX Library for Rapid and Flexible Prototyping of Spiking Neural Networks (2404.05807v1)

Published 8 Apr 2024 in cs.NE

Abstract: Recent advances in algorithms for training spiking neural networks (SNNs) often leverage their unique dynamics. While backpropagation through time (BPTT) with surrogate gradients dominates the field, a rich set of alternatives can situate algorithms at various points along the performance, bio-plausibility, and complexity landscape. Evaluating and comparing algorithms is currently a cumbersome and error-prone process that requires them to be repeatedly re-implemented. We introduce Slax, a JAX-based library designed to accelerate SNN algorithm design, compatible with the broader JAX and Flax ecosystem. Slax provides optimized implementations of diverse training algorithms, allowing direct performance comparison. Its toolkit includes methods to visualize and debug algorithms through loss landscapes, gradient similarities, and other metrics of model behavior during training.
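
For orientation, the sketch below shows the kind of workflow the abstract describes: a surrogate-gradient leaky integrate-and-fire (LIF) cell unrolled over time and differentiated with BPTT, written in plain JAX/Flax. This is not Slax's actual API; the names (`spike_fn`, `LIFCell`, `bptt_loss`), the fast-sigmoid surrogate slope, the threshold of 1.0, and the rate-matching loss are all assumptions chosen only to illustrate the technique.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn


@jax.custom_vjp
def spike_fn(v):
    # Heaviside step: emit a spike wherever the membrane crosses threshold.
    return (v > 0.0).astype(jnp.float32)


def _spike_fwd(v):
    return (v > 0.0).astype(jnp.float32), v


def _spike_bwd(v, g):
    # Fast-sigmoid surrogate derivative; the slope constant 10.0 is an
    # arbitrary choice for this sketch, not a Slax default.
    surrogate = 1.0 / (1.0 + 10.0 * jnp.abs(v)) ** 2
    return (g * surrogate,)


spike_fn.defvjp(_spike_fwd, _spike_bwd)


class LIFCell(nn.Module):
    """Leaky integrate-and-fire layer with a learnable input projection.

    Hypothetical module written for this sketch; not part of Slax.
    """
    features: int
    beta: float = 0.9  # membrane potential decay per time step

    @nn.compact
    def __call__(self, v, x):
        current = nn.Dense(self.features)(x)
        v = self.beta * v + current
        spikes = spike_fn(v - 1.0)  # fire when the membrane exceeds 1.0
        v = v - spikes              # soft reset after a spike
        return v, spikes


def bptt_loss(params, model, inputs, targets):
    # Unroll the cell over time and penalize the gap between mean firing
    # rates and target rates; gradients flow through the whole unroll (BPTT).
    def step(v, x_t):
        return model.apply(params, v, x_t)

    v0 = jnp.zeros((inputs.shape[1], model.features))
    _, spike_train = jax.lax.scan(step, v0, inputs)  # (time, batch, features)
    rates = spike_train.mean(axis=0)
    return jnp.mean((rates - targets) ** 2)


# Toy data: 50 time steps, batch of 4, 16 input channels.
model = LIFCell(features=8)
key = jax.random.PRNGKey(0)
inputs = jax.random.bernoulli(key, 0.3, (50, 4, 16)).astype(jnp.float32)
targets = jnp.full((4, 8), 0.2)
params = model.init(key, jnp.zeros((4, 8)), inputs[0])
loss, grads = jax.value_and_grad(bptt_loss)(params, model, inputs, targets)
```

Per the abstract, Slax's contribution is to supply optimized, interchangeable implementations of training algorithms like the BPTT step above (and its alternatives), so that comparisons and diagnostics such as loss landscapes and gradient similarities do not require re-implementing each algorithm by hand.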
