Neuromorphic Intermediate Representation: A Unified Instruction Set for Interoperable Brain-Inspired Computing (2311.14641v2)

Published 24 Nov 2023 in cs.NE

Abstract: Spiking neural networks and neuromorphic hardware platforms that simulate neuronal dynamics are getting wide attention and are being applied to many relevant problems using Machine Learning. Despite a well-established mathematical foundation for neural dynamics, there exist numerous software and hardware solutions and stacks whose variability makes it difficult to reproduce findings. Here, we establish a common reference frame for computations in digital neuromorphic systems, titled Neuromorphic Intermediate Representation (NIR). NIR defines a set of computational and composable model primitives as hybrid systems combining continuous-time dynamics and discrete events. By abstracting away assumptions around discretization and hardware constraints, NIR faithfully captures the computational model, while bridging differences between the evaluated implementation and the underlying mathematical formalism. NIR supports an unprecedented number of neuromorphic systems, which we demonstrate by reproducing three spiking neural network models of different complexity across 7 neuromorphic simulators and 4 digital hardware platforms. NIR decouples the development of neuromorphic hardware and software, enabling interoperability between platforms and improving accessibility to multiple neuromorphic technologies. We believe that NIR is a key next step in brain-inspired hardware-software co-evolution, enabling research towards the implementation of energy efficient computational principles of nervous systems. NIR is available at neuroir.org

Summary

  • The paper introduces NIR, a unified intermediate representation that standardizes computational primitives for neuromorphic systems.
  • The paper validates NIR by reproducing three models consistently across seven simulators and four hardware platforms.
  • The study shows that decoupling hardware-specific constraints from model design enhances interoperability and accelerates neuromorphic innovation.

Overview

Spiking neural networks (SNNs) and neuromorphic computing are gaining mainstream attention for their promise of brain-like efficiency. Neuromorphic systems, which mimic the brain's neural architecture, can operate at far lower power than conventional computers and potentially solve certain tasks more efficiently. Although many hardware and software platforms have emerged for neuromorphic computing, the absence of a shared intermediate representation makes it difficult to transfer models between systems or to reproduce results across them. The Neuromorphic Intermediate Representation (NIR) was created to address this gap.

Intermediate Representation

NIR is built around computational primitives that model hybrid dynamical systems in continuous time: each primitive describes a basic computation, and primitives are composed into graphs to form larger, more complex models. Because every supporting technology stack interprets the same graph, computations map consistently across platforms. Crucially, NIR captures the fundamental computation of a neuromorphic model without committing to any particular hardware's discretization scheme or numerical method, so the same model can be represented faithfully on very different systems.
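
As a concrete illustration, the sketch below composes two primitives, an affine map feeding a layer of leaky integrate-and-fire (LIF) neurons, into a graph with the open-source nir Python package and serializes it to disk. The class and argument names (NIRGraph.from_list, Affine, LIF, nir.write) follow the package's documented interface at the time of writing and may differ between versions; treat this as a sketch, not a definitive reference.

```python
# Minimal sketch of composing NIR primitives with the `nir` package
# (https://github.com/neuromorphs/NIR). Class and argument names are
# assumed from the package docs and may vary across versions.
import numpy as np
import nir

graph = nir.NIRGraph.from_list(
    # Affine map: y = W x + b
    nir.Affine(
        weight=np.array([[1.0, 2.0], [3.0, 4.0]]),
        bias=np.array([0.0, 0.0]),
    ),
    # Leaky integrate-and-fire neurons, defined in continuous time:
    # tau * dv/dt = (v_leak - v) + r * i, spike when v >= v_threshold
    nir.LIF(
        tau=np.array([0.02, 0.02]),        # membrane time constants (s)
        r=np.array([1.0, 1.0]),            # membrane resistances
        v_leak=np.array([0.0, 0.0]),       # leak (resting) potentials
        v_threshold=np.array([1.0, 1.0]),  # firing thresholds
    ),
)

# The graph is a platform-neutral description of the computation;
# serialize it once and load it on any supporting backend.
nir.write("network.nir", graph)
```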

Interoperability and Reproducibility

One of NIR's pivotal advances is its system-agnostic design, which makes it possible to execute the same computation on diverse neuromorphic platforms with consistent behavior. The paper demonstrates this by reproducing three computational models across seven simulators and four hardware platforms. By decoupling the evolution of neuromorphic hardware from that of software, NIR enables straightforward transitions between platforms, which simplifies optimization, improves access to neuromorphic technologies, and accelerates their development.
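
The resulting workflow can be sketched as follows: a graph serialized by any exporting framework is read back with nir.read and handed to a backend-specific loader. The some_backend.from_nir call below is a hypothetical placeholder; each supported simulator or hardware toolchain provides its own import routine.

```python
# Hedged sketch of the cross-platform workflow. `nir.read` is part of
# the `nir` package; `some_backend.from_nir` is a hypothetical
# stand-in for whichever loader a given simulator or chip provides.
import nir

graph = nir.read("network.nir")  # platform-neutral model description

# Every backend sees the same nodes and edges, regardless of which
# framework produced the file.
for name, node in graph.nodes.items():
    print(name, type(node).__name__)

# deployed = some_backend.from_nir(graph)  # backend-specific loader
```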

Case Studies and Experiments

The paper evaluates three tasks representing common neuromorphic use cases: a leaky integrate-and-fire (LIF) neuron model, a spiking convolutional network, and a spiking recurrent network. The feed-forward architectures performed consistently across platforms, while the recurrent network showed discrepancies attributable to platform-specific discretization methods and hardware constraints, highlighting open questions in robust network design for neuromorphic hardware.
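
The following small numerical experiment (not taken from the paper) shows how such discrepancies arise. Integrating the same continuous-time LIF membrane equation, tau * dv/dt = (v_leak - v) + R * I, with a forward-Euler step versus an exact exponential update produces slightly different trajectories, and in recurrent networks these per-step differences feed back on themselves and compound.

```python
# Illustration (not from the paper's code) of why backends that
# discretize the same continuous LIF dynamics differently disagree:
# forward Euler vs. exact integration of tau*dv/dt = (v_leak - v) + r*i.
import numpy as np

tau, r, v_leak, dt = 0.02, 1.0, 0.0, 1e-3  # time constant (s), step (s)
i_in = 1.2                                 # constant input current

v_euler = v_exact = 0.0
for _ in range(50):  # simulate 50 ms
    # Forward Euler: first-order approximation of the ODE
    v_euler += (dt / tau) * ((v_leak - v_euler) + r * i_in)
    # Exact update (valid for constant input): exponential decay
    # toward the steady state v_inf = v_leak + r * i_in
    v_inf = v_leak + r * i_in
    v_exact = v_inf + (v_exact - v_inf) * np.exp(-dt / tau)

print(f"Euler: {v_euler:.4f}  exact: {v_exact:.4f}")  # ~1.108 vs ~1.102
# The per-step mismatch is tiny, but recurrent connections amplify it,
# consistent with the discrepancies reported for the recurrent task.
```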

Conclusion and Open-Source Availability

The research underscores the need for a shared representation like NIR in neuromorphic computing and its role in supporting both research and practical applications. By offering a common frame of reference for comparing behavior across a range of hardware and software systems, NIR represents a significant step toward the continued evolution and study of brain-inspired technologies. NIR is open source and available on GitHub, so the research community and industry stakeholders alike can contribute to it and build on its capabilities.
