Topological Representations of Heterogeneous Learning Dynamics of Recurrent Spiking Neural Networks (2403.12462v1)

Published 19 Mar 2024 in cs.NE and cs.AI

Abstract: Spiking Neural Networks (SNNs) have become an essential paradigm in neuroscience and artificial intelligence, providing brain-inspired computation. Recent work has studied the internal representations of deep neural networks, but little attention has been paid to the representations learned by SNNs, especially those trained with unsupervised local learning methods like spike-timing dependent plasticity (STDP). Barannikov et al. [1] introduced Representation Topology Divergence (RTD), a method for comparing topological mappings of learned representations. Though useful, RTD is engineered specifically for feedforward deep neural networks and cannot be applied directly to recurrent networks like Recurrent SNNs (RSNNs). This paper introduces a novel methodology for using RTD to measure the difference between the distributed representations of RSNN models trained with different learning methods. We propose a reformulation of RSNNs as feedforward autoencoder networks with skip connections, which lets us compute RTD for recurrent networks. With this tool, we investigate the representations learned by RSNNs trained using STDP and the role of heterogeneity in the synaptic dynamics in learning such representations. We demonstrate that heterogeneous STDP in RSNNs yields representations distinct from those of its homogeneous and surrogate gradient-based supervised learning counterparts. Our results provide insights into the potential of heterogeneous SNN models, aiding the development of more efficient and biologically plausible hybrid artificial intelligence systems.
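
The central methodological move, unrolling the recurrent dynamics so each timestep behaves like one feedforward layer with a skip connection from the input, can be illustrated with a minimal sketch. The snippet below is an illustrative assumption, not the authors' implementation: it runs a single leaky integrate-and-fire recurrent layer, treats each timestep as one "layer" of the unrolled network, and collects the per-timestep spike patterns that an RTD-style comparison would consume. All sizes, constants, and names here are hypothetical.

```python
# Minimal sketch (not the paper's code): unrolling an RSNN layer in time
# so its per-timestep representations can be read out like the layers of
# a feedforward network. All parameters below are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, T = 16, 64, 10                  # input size, hidden size, timesteps (assumed)
W_in = rng.normal(0, 0.5, (N_HID, N_IN))     # input weights (skip connection at every step)
W_rec = rng.normal(0, 0.1, (N_HID, N_HID))   # recurrent weights (layer t-1 -> layer t)
tau, v_th = 0.9, 1.0                         # membrane leak factor and firing threshold

def unrolled_rsnn(x):
    """Run a leaky integrate-and-fire RSNN on a constant input x.

    Feedforward view: timestep t is "layer" t; the recurrent weights link
    layer t-1 to layer t, and the skip connection re-injects x at every
    layer. Returns the spike pattern of every layer, shape (T, N_HID).
    """
    v = np.zeros(N_HID)                      # membrane potentials
    s = np.zeros(N_HID)                      # spikes from the previous "layer"
    layers = []
    for t in range(T):
        v = tau * v + W_rec @ s + W_in @ x   # prev-layer spikes + input skip
        s = (v >= v_th).astype(float)        # spike where threshold is crossed
        v = np.where(s > 0, 0.0, v)          # reset membrane after a spike
        layers.append(s.copy())
    return np.stack(layers)

# Collect representations over a batch of inputs; an RTD-style comparison
# would then be computed between two such sets (e.g. from heterogeneous-STDP
# and surrogate-gradient models). The sketch stops at collecting them.
X = rng.normal(size=(32, N_IN))
reps = np.stack([unrolled_rsnn(x) for x in X])   # shape (32, T, N_HID)
print(reps.shape)
```
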

References (37)
  1. Representation topology divergence: A method for comparing neural network representations. arXiv preprint arXiv:2201.00058, 2021.
  2. Introduction to spiking neural networks: Information processing, learning and applications. Acta Neurobiologiae Experimentalis, 71(4):409–433, 2011.
  3. Mathematical formulations of Hebbian learning. Biological Cybernetics, 87(5):404–415, 2002.
  4. Brain-inspired spatiotemporal processing algorithms for efficient event-based perception. In 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 1–6. IEEE, 2023.
  5. Characterization of generalizability of spike timing dependent plasticity trained spiking neural networks. Frontiers in Neuroscience, 15:695357, 2021.
  6. Surrogate gradient learning in spiking neural networks. IEEE Signal Processing Magazine, 36:61–63, 2019.
  7. TrueNorth: Design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 34(10):1537–1557, 2015.
  8. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro, 38(1):82–99, 2018.
  9. Deep neural networks as Gaussian processes. arXiv preprint arXiv:1711.00165, 2017.
  10. SpArNet: Sparse asynchronous neural network execution for energy efficient inference. In 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), pages 256–260. IEEE, 2020.
  11. Brain-inspired spiking neural network for online unsupervised time series prediction. In 2023 International Joint Conference on Neural Networks (IJCNN), pages 1–8, 2023.
  12. Temporal spike sequence learning via backpropagation for deep spiking neural networks. Advances in Neural Information Processing Systems, 33:12022–12033, 2020.
  13. A fully spiking hybrid neural network for energy-efficient object detection. IEEE Transactions on Image Processing, 30:9014–9029, 2021.
  14. Spiking-YOLO: Spiking neural network for energy-efficient object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 11270–11277, 2020.
  15. Function approximation with spiked random networks. IEEE Transactions on Neural Networks, 10(1):3–9, 1999.
  16. A spiking neural network architecture for nonlinear function approximation. Neural Networks, 14(6):933–939, 2001.
  17. Neural heterogeneity promotes robust learning. bioRxiv, 2021.
  18. Heterogeneous recurrent spiking neural network for spatio-temporal classification. Frontiers in Neuroscience, 17:994517, 2023.
  19. Heterogeneous neuronal and synaptic dynamics for spike-efficient unsupervised learning: Theory and design principles. In The Eleventh International Conference on Learning Representations, 2023.
  20. A heterogeneous spiking neural network for unsupervised learning of spatiotemporal patterns. Frontiers in Neuroscience, 14:1406, 2021.
  21. Sparse spiking neural network: Exploiting heterogeneity in timescales for pruning recurrent SNN. In The Twelfth International Conference on Learning Representations, 2024.
  22. Representational similarity analysis: connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2:4, 2008.
  23. Deep supervised, but not unsupervised, models may explain IT cortical representation. PLoS Computational Biology, 10(11):e1003915, 2014.
  24. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619–8624, 2014.
  25. On the expressive power of deep neural networks. In International Conference on Machine Learning, pages 2847–2854. PMLR, 2017.
  26. Insights on representational similarity in neural networks with canonical correlation. Advances in Neural Information Processing Systems, 31, 2018.
  27. Similarity of neural network representations revisited. In International Conference on Machine Learning, pages 3519–3529. PMLR, 2019.
  28. Do wide and deep networks learn the same things? Uncovering how neural network representations vary with width and depth. arXiv preprint arXiv:2010.15327, 2020.
  29. Do vision transformers see like convolutional neural networks? Advances in Neural Information Processing Systems, 34:12116–12128, 2021.
  30. A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications, 11(1):3625, 2020.
  31. Uncovering the representation of spiking neural networks trained with surrogate gradient. arXiv preprint arXiv:2304.13098, 2023.
  32. Spike-timing-dependent plasticity and reliability optimization: the role of neuron dynamics. Neural Computation, 23(7):1768–1789, 2011.
  33. Computation through neural population dynamics. Annual Review of Neuroscience, 43:249–275, 2020.
  34. A spike train distance robust to firing rate changes based on the earth mover’s distance. Frontiers in Computational Neuroscience, 13:82, 2019.
  35. The earth mover’s distance as a metric for image retrieval. International Journal of Computer Vision, 40:99–121, 2000.
  36. CIFAR10-DVS: an event-stream dataset for object classification. Frontiers in Neuroscience, 11:309, 2017.
  37. The Heidelberg spiking data sets for the systematic evaluation of spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems, 33(7):2744–2757, 2020.
Authors (2)
  1. Biswadeep Chakraborty
  2. Saibal Mukhopadhyay
Citations (2)
