Forward Direct Feedback Alignment for Online Gradient Estimates of Spiking Neural Networks (2403.08804v1)

Published 6 Feb 2024 in cs.NE and cs.LG

Abstract: There is interest in finding energy-efficient alternatives to current state-of-the-art neural network training algorithms. Spiking neural networks (SNNs) are a promising approach because they can be simulated energy-efficiently on neuromorphic hardware platforms. However, these platforms impose constraints on the design of training algorithms; most importantly, backpropagation cannot be implemented on them. We propose a novel neuromorphic algorithm, Spiking Forward Direct Feedback Alignment (SFDFA), an adaptation of Forward Direct Feedback Alignment for training SNNs. SFDFA estimates the weights between output and hidden neurons and uses them as feedback connections. The main contributions of this paper are to describe how exact local gradients of spikes can be computed online while taking into account the intra-neuron dependencies between post-synaptic spikes, and to derive a dynamical system compatible with neuromorphic hardware. We compare SFDFA with a number of competitor algorithms and show that it achieves higher performance and faster convergence.
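The core mechanism of direct feedback alignment (DFA), which SFDFA builds on, can be illustrated with a minimal non-spiking sketch. Everything below is an illustrative assumption rather than the paper's actual setup: the layer sizes, the sigmoid activation, and the toy update loop are hypothetical, and SFDFA's estimation of the output weights is only mimicked here by copying `W2` into the feedback matrix `B`. The key point is that the hidden layer receives the output error through `B`, not through the transposed forward weights as in backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network (hypothetical sizes, not the paper's experiments).
n_in, n_hidden, n_out = 4, 8, 3

W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))
# Classic DFA uses a fixed random feedback matrix B. SFDFA instead uses
# estimates of the output weights as feedback; we mimic that by copying W2.
B = W2.copy()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dfa_step(x, target, lr=0.1):
    """One DFA update: the hidden layer gets the output error projected
    through the feedback matrix B instead of W2.T (backpropagation)."""
    global W1, W2
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)
    e = y - target                       # output error (MSE gradient)
    dW2 = np.outer(e * y * (1 - y), h)   # local output-layer gradient
    # Direct feedback: error sent straight to the hidden layer via B.
    e_h = (B.T @ e) * h * (1 - h)
    dW1 = np.outer(e_h, x)
    W2 -= lr * dW2
    W1 -= lr * dW1
    return float(np.mean(e ** 2))
```

Because the error skips the intermediate forward weights, each layer's update depends only on locally available activity plus a broadcast error signal, which is what makes the scheme attractive for neuromorphic hardware where a symmetric backward pass is unavailable.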

References (36)
  1. Bacho F, Chu D (2024) Low-variance forward gradients using direct feedback alignment and momentum. Neural Networks 169:572–583. 10.1016/j.neunet.2023.10.051
  2. Bellec G, Scherr F, Subramoney A, et al (2020) A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications 11(1). 10.1038/s41467-020-17236-y
  3. Bohté SM, Kok JN, Poutré HL (2000) SpikeProp: backpropagation for networks of spiking neurons. In: The European Symposium on Artificial Neural Networks. URL https://api.semanticscholar.org/CorpusID:14069916
  4. Cohen G, Afshar S, Tapson J, et al (2017) EMNIST: extending MNIST to handwritten letters. In: 2017 International Joint Conference on Neural Networks (IJCNN), pp 2921–2926
  5. Comsa IM, Potempa K, Versari L, et al (2020) Temporal coding in spiking neural networks with alpha synaptic function: learning with backpropagation
  6. Crafton B, Parihar A, Gebhardt E, et al (2019) Direct feedback alignment with sparse connections for local learning. Frontiers in Neuroscience 13:525
  7. Cramer B, Stradmann Y, Schemmel J, et al (2022) The Heidelberg spiking data sets for the systematic evaluation of spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7):2744–2757
  8. Davies M, Srinivasa N, Lin TH, et al (2018) Loihi: a neuromorphic manycore processor with on-chip learning. IEEE Micro 38(1):82–99. 10.1109/MM.2018.112130359
  9. Frady EP, Sanborn S, Shrestha SB, et al (2022) Efficient neuromorphic signal processing with resonator neurons. Journal of Signal Processing Systems 94(10):917–927. 10.1007/s11265-022-01772-5
 10. Furber SB, Galluppi F, Temple S, et al (2014) The SpiNNaker project. Proceedings of the IEEE 102(5):652–665. 10.1109/JPROC.2014.2304638
 11. Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press. 10.1017/CBO9780511815706
 12. Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times. 3(9):823–835
 13. Han D, Yoo HJ (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452
 14. Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: a versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/TNNLS.2019.2919662
 15. Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. arXiv:1804.10574
 16. Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA
 17. Kingma DP, Ba J (2017) Adam: a method for stochastic optimization. arXiv:1412.6980
 18. Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems (NIPS '20). Curran Associates Inc., Red Hook, NY, USA
 19. LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist
 20. Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143
 21. Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems
 22. Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324
 23. Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324
 24. Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.
 25. Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935
 26. Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006
 27. Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536
 28. Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems (ICONS '19). Association for Computing Machinery, New York, NY, USA. 10.1145/3354265.3354275
 29. Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372. 10.1109/DAC18074.2021.9586323
 30. Shrestha SB, Orchard G (2018) SLAYER: spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS '18). Curran Associates Inc., Red Hook, NY, USA, pp 1419–1428
 31. Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training SpikeProp networks: cause of surges in training process. pp 3062–3066. 10.1109/IJCNN.2009.5178756
 32. Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training SpikeProp networks: cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066. 10.1109/IJCNN.2009.5178756
 33. Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
 34. Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
 35. Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms
 36. Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence. URL https://api.semanticscholar.org/CorpusID:226290189
In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. 
Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Cramer B, Stradmann Y, Schemmel J, et al (2022) The heidelberg spiking data sets for the systematic evaluation of spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7):2744–2757 Davies et al [2018] Davies M, Srinivasa N, Lin TH, et al (2018) Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro 38(1):82–99. 10.1109/MM.2018.112130359 Frady et al [2022] Frady EP, Sanborn S, Shrestha SB, et al (2022) Efficient neuromorphic signal processing with resonator neurons. Journal of Signal Processing Syst 94(10):917?927. 10.1007/s11265-022-01772-5 Furber et al [2014] Furber SB, Galluppi F, Temple S, et al (2014) The spinnaker project. Proceedings of the IEEE 102(5):652–665. 10.1109/JPROC.2014.2304638 Gerstner and Kistler [2002] Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. 
Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Davies M, Srinivasa N, Lin TH, et al (2018) Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro 38(1):82–99. 10.1109/MM.2018.112130359 Frady et al [2022] Frady EP, Sanborn S, Shrestha SB, et al (2022) Efficient neuromorphic signal processing with resonator neurons. Journal of Signal Processing Syst 94(10):917?927. 10.1007/s11265-022-01772-5 Furber et al [2014] Furber SB, Galluppi F, Temple S, et al (2014) The spinnaker project. 
Proceedings of the IEEE 102(5):652–665. 10.1109/JPROC.2014.2304638 Gerstner and Kistler [2002] Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 
10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Frady EP, Sanborn S, Shrestha SB, et al (2022) Efficient neuromorphic signal processing with resonator neurons. Journal of Signal Processing Syst 94(10):917?927. 10.1007/s11265-022-01772-5 Furber et al [2014] Furber SB, Galluppi F, Temple S, et al (2014) The spinnaker project. Proceedings of the IEEE 102(5):652–665. 10.1109/JPROC.2014.2304638 Gerstner and Kistler [2002] Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. 
Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Furber SB, Galluppi F, Temple S, et al (2014) The spinnaker project. Proceedings of the IEEE 102(5):652–665. 10.1109/JPROC.2014.2304638 Gerstner and Kistler [2002] Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. 
Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. 
In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. 
Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. 
Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. 
In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 
10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. 
Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. 
In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. arXiv:1412.6980
Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'20
LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist 2
Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143
Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP
Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324
Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324
Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.
Refinetti et al [2021] Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935
Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006
Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536
Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS '19, 10.1145/3354265.3354275
Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323
Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'18, pp 1419–1428
Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training SpikeProp networks: cause of surges in training process. pp 3062–3066, 10.1109/IJCNN.2009.5178756
Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training SpikeProp networks: cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756
Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms
Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
  2. Bellec G, Scherr F, Subramoney A, et al (2020) A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications 11(1). 10.1038/s41467-020-17236-y Bohté et al [2000] Bohté SM, Kok JN, Poutré HL (2000) Spikeprop: backpropagation for networks of spiking neurons. In: The European Symposium on Artificial Neural Networks, URL https://api.semanticscholar.org/CorpusID:14069916 Cohen et al [2017] Cohen G, Afshar S, Tapson J, et al (2017) Emnist: Extending mnist to handwritten letters. In: 2017 International Joint Conference on Neural Networks (IJCNN), pp 2921–2926 Comsa et al [2020] Comsa IM, Potempa K, Versari L, et al (2020) Temporal coding in spiking neural networks with alpha synaptic function: Learning with backpropagation Crafton et al [2019] Crafton B, Parihar A, Gebhardt E, et al (2019) Direct feedback alignment with sparse connections for local learning. Frontiers in neuroscience 13:525 Cramer et al [2022] Cramer B, Stradmann Y, Schemmel J, et al (2022) The heidelberg spiking data sets for the systematic evaluation of spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7):2744–2757 Davies et al [2018] Davies M, Srinivasa N, Lin TH, et al (2018) Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro 38(1):82–99. 10.1109/MM.2018.112130359 Frady et al [2022] Frady EP, Sanborn S, Shrestha SB, et al (2022) Efficient neuromorphic signal processing with resonator neurons. Journal of Signal Processing Syst 94(10):917?927. 10.1007/s11265-022-01772-5 Furber et al [2014] Furber SB, Galluppi F, Temple S, et al (2014) The spinnaker project. Proceedings of the IEEE 102(5):652–665. 10.1109/JPROC.2014.2304638 Gerstner and Kistler [2002] Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. 
Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Bohté SM, Kok JN, Poutré HL (2000) Spikeprop: backpropagation for networks of spiking neurons. In: The European Symposium on Artificial Neural Networks, URL https://api.semanticscholar.org/CorpusID:14069916 Cohen et al [2017] Cohen G, Afshar S, Tapson J, et al (2017) Emnist: Extending mnist to handwritten letters. 
In: 2017 International Joint Conference on Neural Networks (IJCNN), pp 2921–2926 Comsa et al [2020] Comsa IM, Potempa K, Versari L, et al (2020) Temporal coding in spiking neural networks with alpha synaptic function: Learning with backpropagation Crafton et al [2019] Crafton B, Parihar A, Gebhardt E, et al (2019) Direct feedback alignment with sparse connections for local learning. Frontiers in neuroscience 13:525 Cramer et al [2022] Cramer B, Stradmann Y, Schemmel J, et al (2022) The heidelberg spiking data sets for the systematic evaluation of spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7):2744–2757 Davies et al [2018] Davies M, Srinivasa N, Lin TH, et al (2018) Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro 38(1):82–99. 10.1109/MM.2018.112130359 Frady et al [2022] Frady EP, Sanborn S, Shrestha SB, et al (2022) Efficient neuromorphic signal processing with resonator neurons. Journal of Signal Processing Syst 94(10):917?927. 10.1007/s11265-022-01772-5 Furber et al [2014] Furber SB, Galluppi F, Temple S, et al (2014) The spinnaker project. Proceedings of the IEEE 102(5):652–665. 10.1109/JPROC.2014.2304638 Gerstner and Kistler [2002] Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. 
IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. 
In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Cohen G, Afshar S, Tapson J, et al (2017) Emnist: Extending mnist to handwritten letters. In: 2017 International Joint Conference on Neural Networks (IJCNN), pp 2921–2926 Comsa et al [2020] Comsa IM, Potempa K, Versari L, et al (2020) Temporal coding in spiking neural networks with alpha synaptic function: Learning with backpropagation Crafton et al [2019] Crafton B, Parihar A, Gebhardt E, et al (2019) Direct feedback alignment with sparse connections for local learning. Frontiers in neuroscience 13:525 Cramer et al [2022] Cramer B, Stradmann Y, Schemmel J, et al (2022) The heidelberg spiking data sets for the systematic evaluation of spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7):2744–2757 Davies et al [2018] Davies M, Srinivasa N, Lin TH, et al (2018) Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro 38(1):82–99. 10.1109/MM.2018.112130359 Frady et al [2022] Frady EP, Sanborn S, Shrestha SB, et al (2022) Efficient neuromorphic signal processing with resonator neurons. Journal of Signal Processing Syst 94(10):917?927. 
10.1007/s11265-022-01772-5 Furber et al [2014] Furber SB, Galluppi F, Temple S, et al (2014) The spinnaker project. Proceedings of the IEEE 102(5):652–665. 10.1109/JPROC.2014.2304638 Gerstner and Kistler [2002] Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 
10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. 
Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 
1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. 
In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. 
Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 
10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. 
Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. 
Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 
10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.1109/MM.2018.112130359 Frady et al [2022] Frady EP, Sanborn S, Shrestha SB, et al (2022) Efficient neuromorphic signal processing with resonator neurons. Journal of Signal Processing Syst 94(10):917?927. 10.1007/s11265-022-01772-5 Furber et al [2014] Furber SB, Galluppi F, Temple S, et al (2014) The spinnaker project. Proceedings of the IEEE 102(5):652–665. 10.1109/JPROC.2014.2304638 Gerstner and Kistler [2002] Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Davies M, Srinivasa N, Lin TH, et al (2018) Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro 38(1):82–99. 10.1109/MM.2018.112130359 Frady et al [2022] Frady EP, Sanborn S, Shrestha SB, et al (2022) Efficient neuromorphic signal processing with resonator neurons. Journal of Signal Processing Syst 94(10):917?927. 10.1007/s11265-022-01772-5 Furber et al [2014] Furber SB, Galluppi F, Temple S, et al (2014) The spinnaker project. Proceedings of the IEEE 102(5):652–665. 10.1109/JPROC.2014.2304638 Gerstner and Kistler [2002] Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 
1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 
10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Frady EP, Sanborn S, Shrestha SB, et al (2022) Efficient neuromorphic signal processing with resonator neurons. Journal of Signal Processing Syst 94(10):917?927. 10.1007/s11265-022-01772-5 Furber et al [2014] Furber SB, Galluppi F, Temple S, et al (2014) The spinnaker project. Proceedings of the IEEE 102(5):652–665. 10.1109/JPROC.2014.2304638 Gerstner and Kistler [2002] Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. 
Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. 
In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Furber SB, Galluppi F, Temple S, et al (2014) The spinnaker project. Proceedings of the IEEE 102(5):652–665. 10.1109/JPROC.2014.2304638 Gerstner and Kistler [2002] Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 
1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 
10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. 
Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times. Nature Machine Intelligence 3(9):823–835
Han and Yoo [2019] Han D, Yoo HJ (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452
Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662
Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. arXiv:1804.10574
Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA
Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. arXiv:1412.6980
Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'20
LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist
Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143
Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP
Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324
Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324
Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.
Refinetti et al [2021] Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935
Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006
Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536
Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS '19. 10.1145/3354265.3354275
Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372. 10.1109/DAC18074.2021.9586323
Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'18, pp 1419–1428
Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training SpikeProp networks: cause of surges in training process. pp 3062–3066. 10.1109/IJCNN.2009.5178756
Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training SpikeProp networks: cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066. 10.1109/IJCNN.2009.5178756
Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747
Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. 
Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. 
Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 
10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.1109/JPROC.2014.2304638
Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press. 10.1017/CBO9780511815706
Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times. Nature Machine Intelligence 3(9):823–835
Han D, Yoo HJ (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452
Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/TNNLS.2019.2919662
Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. arXiv:1804.10574
Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA
Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. arXiv:1412.6980
Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'20
LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist
Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143
Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP
Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324
Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324
Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.
Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935
Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006
Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536
Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems (ICONS '19). Association for Computing Machinery, New York, NY, USA. 10.1145/3354265.3354275
Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372. 10.1109/DAC18074.2021.9586323
Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS'18). Curran Associates Inc., Red Hook, NY, USA, pp 1419–1428
Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training SpikeProp networks – cause of surges in training process. pp 3062–3066. 10.1109/IJCNN.2009.5178756
Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training SpikeProp networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066. 10.1109/IJCNN.2009.5178756
Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms
Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence. URL https://api.semanticscholar.org/CorpusID:226290189
Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Furber SB, Galluppi F, Temple S, et al (2014) The spinnaker project. Proceedings of the IEEE 102(5):652–665. 10.1109/JPROC.2014.2304638 Gerstner and Kistler [2002] Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. 
Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. 
In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. 
Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. 
Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. 
In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS '19, 10.1145/3354265.3354275
Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323
Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'18, pp 1419–1428
Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training SpikeProp networks – cause of surges in training process. pp 3062–3066, 10.1109/IJCNN.2009.5178756
Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training SpikeProp networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756
Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms
Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662
Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574
Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA
Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980
Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'20
LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist 2
Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143
Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP
Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324
Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324
Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.
Refinetti et al [2021] Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935
Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006
Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
  5. Comsa IM, Potempa K, Versari L, et al (2020) Temporal coding in spiking neural networks with alpha synaptic function: Learning with backpropagation Crafton et al [2019] Crafton B, Parihar A, Gebhardt E, et al (2019) Direct feedback alignment with sparse connections for local learning. Frontiers in neuroscience 13:525 Cramer et al [2022] Cramer B, Stradmann Y, Schemmel J, et al (2022) The heidelberg spiking data sets for the systematic evaluation of spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7):2744–2757 Davies et al [2018] Davies M, Srinivasa N, Lin TH, et al (2018) Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro 38(1):82–99. 10.1109/MM.2018.112130359 Frady et al [2022] Frady EP, Sanborn S, Shrestha SB, et al (2022) Efficient neuromorphic signal processing with resonator neurons. Journal of Signal Processing Syst 94(10):917?927. 10.1007/s11265-022-01772-5 Furber et al [2014] Furber SB, Galluppi F, Temple S, et al (2014) The spinnaker project. Proceedings of the IEEE 102(5):652–665. 10.1109/JPROC.2014.2304638 Gerstner and Kistler [2002] Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 
10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. 
Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756
Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms
Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 
10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. 
Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 
1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. 
In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. 
Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
Gerstner and Kistler [2002] Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press. 10.1017/CBO9780511815706
Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times. Nature Machine Intelligence 3(9):823–835
Han and Yoo [2019] Han D, Yoo HJ (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452
Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/TNNLS.2019.2919662
Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. arXiv:1804.10574
Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. In: Advances in Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA
Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. arXiv:1412.6980
Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'20
LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist
Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143
Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP
Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324
Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324
Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.
Refinetti et al [2021] Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935
Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006
Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536
Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS '19. 10.1145/3354265.3354275
Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372. 10.1109/DAC18074.2021.9586323
Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'18, pp 1419–1428
Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training SpikeProp networks – cause of surges in training process. pp 3062–3066. 10.1109/IJCNN.2009.5178756
Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training SpikeProp networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066. 10.1109/IJCNN.2009.5178756
Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747
Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence. URL https://api.semanticscholar.org/CorpusID:226290189
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. 
Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. 
In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 
10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. 
Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. 
In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. arXiv:1412.6980
Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'20
LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist
Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143
Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems
Neftci et al [2017] Neftci EO, Augustine C, Paul S, et al (2017) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324. 10.3389/fnins.2017.00324
Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.
Refinetti et al [2021] Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935
Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006
Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536
Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS '19. 10.1145/3354265.3354275
Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372. 10.1109/DAC18074.2021.9586323
Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'18, pp 1419–1428
Takase et al [2009] Takase H, Fujita M, Kawanaka H, et al (2009) Obstacle to training SpikeProp networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066. 10.1109/IJCNN.2009.5178756
Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms
Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence. URL https://api.semanticscholar.org/CorpusID:226290189
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
  7. Cramer B, Stradmann Y, Schemmel J, et al (2022) The Heidelberg spiking data sets for the systematic evaluation of spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems 33(7):2744–2757
  8. Davies M, Srinivasa N, Lin TH, et al (2018) Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro 38(1):82–99. 10.1109/MM.2018.112130359
  9. Frady EP, Sanborn S, Shrestha SB, et al (2022) Efficient neuromorphic signal processing with resonator neurons. Journal of Signal Processing Systems 94(10):917–927. 10.1007/s11265-022-01772-5
  10. Furber SB, Galluppi F, Temple S, et al (2014) The SpiNNaker project. Proceedings of the IEEE 102(5):652–665. 10.1109/JPROC.2014.2304638
  11. Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press. 10.1017/CBO9780511815706
  12. Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times. Nature Machine Intelligence 3(9):823–835
  13. Han D, Yoo HJ (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452
  14. Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/TNNLS.2019.2919662
  15. Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. arXiv:1804.10574
  16. Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA
  17. Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. arXiv:1412.6980
  18. Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems (NIPS'20). Curran Associates Inc., Red Hook, NY, USA
  19. LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist
  20. Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143
  21. Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP
  22. Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324
  23. Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324
  24. Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.
  25. Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935
  26. Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006
  27. Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536
  28. Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems (ICONS '19). Association for Computing Machinery, New York, NY, USA. 10.1145/3354265.3354275
  29. Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372. 10.1109/DAC18074.2021.9586323
  30. Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS'18). Curran Associates Inc., Red Hook, NY, USA, pp 1419–1428
  31. Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training SpikeProp networks: cause of surges in training process. pp 3062–3066. 10.1109/IJCNN.2009.5178756
  32. Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training SpikeProp networks: cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066. 10.1109/IJCNN.2009.5178756
  33. Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
  34. Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
  35. Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms
  36. Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence. URL https://api.semanticscholar.org/CorpusID:226290189
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 
10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. 
Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. 
Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 
10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
10.1109/JPROC.2014.2304638 Gerstner and Kistler [2002] Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 10.1017/CBO9780511815706 Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times 3(9):823–835 Han and Yoo [2019] Han D, Yoo HJ (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. arXiv:1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. arXiv:1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. ATT Labs [Online] Available: http://yann.lecun.com/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS '19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'18, pp 1419–1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training SpikeProp networks – cause of surges in training process. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training SpikeProp networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor.
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. 
Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. 
Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 
10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 
10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 
10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. 
Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS'18, pp 1419–1428
Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756
Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756
Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms
Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines.
Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Han D, Yoo Hj (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452 Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 
1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 
10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662 Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 
1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 
10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 
10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 
10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. 
Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. 
Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 
10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
  11. Gerstner W, Kistler WM (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press. 10.1017/CBO9780511815706
  12. Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times. Nature Machine Intelligence 3(9):823–835
  13. Han D, Yoo HJ (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452
  14. Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/TNNLS.2019.2919662
  15. Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. arXiv:1804.10574
  16. Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. In: Advances in Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA
  17. Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. arXiv:1412.6980
  18. Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems (NIPS'20). Curran Associates Inc., Red Hook, NY, USA
  19. LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist
  20. Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143
  21. Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP
  22. Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324
  23. Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324
  24. Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.
  25. Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935
  26. Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006
  27. Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536
  28. Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems (ICONS '19). Association for Computing Machinery, New York, NY, USA. 10.1145/3354265.3354275
  29. Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372. 10.1109/DAC18074.2021.9586323
  30. Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS'18). Curran Associates Inc., Red Hook, NY, USA, pp 1419–1428
  31. Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training SpikeProp networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066. 10.1109/IJCNN.2009.5178756
  32. Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training SpikeProp networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066. 10.1109/IJCNN.2009.5178756
  33. Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
  34. Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
  35. Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747
  36. Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence. URL https://api.semanticscholar.org/CorpusID:226290189
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 
10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. 
Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. 
Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 
10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
Göltz et al [2021] Göltz J, Kriener L, Baumbach A, et al (2021) Fast and energy-efficient neuromorphic deep learning with first-spike times. Nature Machine Intelligence 3(9):823–835
Han and Yoo [2019] Han D, Yoo HJ (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452
Hong et al [2020] Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662
Huo et al [2018] Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. arXiv:1804.10574
Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA
Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. arXiv:1412.6980
Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'20
LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist
Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143
Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP
Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324
Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324
Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.
Refinetti et al [2021] Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935
Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006
Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536
Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS '19. 10.1145/3354265.3354275
Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372. 10.1109/DAC18074.2021.9586323
Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'18, pp 1419–1428
Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training SpikeProp networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066. 10.1109/IJCNN.2009.5178756
Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training SpikeProp networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066. 10.1109/IJCNN.2009.5178756
Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747
Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence. URL https://api.semanticscholar.org/CorpusID:226290189
Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines.
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 
10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
Han D, Yoo HJ (2019) Direct feedback alignment based convolutional neural network training for low-power online learning processor. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp 2445–2452
Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662
Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. arXiv:1804.10574
Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA
Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. arXiv:1412.6980
Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'20
LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist 2
Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143
Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP
Neftci EO, Augustine C, Paul S, et al (2017) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324. 10.3389/fnins.2017.00324
Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.
Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935
Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006
Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536
Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS '19. 10.1145/3354265.3354275
Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'18, pp 1419–1428
Takase H, Fujita M, Kawanaka H, et al (2009) Obstacle to training SpikeProp networks: cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066. 10.1109/IJCNN.2009.5178756
Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms
Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence. URL https://api.semanticscholar.org/CorpusID:226290189
Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor.
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 
10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 
10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Hong C, Wei X, Wang J, et al (2020) Training spiking neural networks for cognitive tasks: A versatile framework compatible with various temporal codes. IEEE Transactions on Neural Networks and Learning Systems 31(4):1285–1296. 10.1109/tnnls.2019.2919662, URL https://doi.org/10.1109/tnnls.2019.2919662

Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. arXiv:1804.10574

Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. In: Advances in Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA

Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. arXiv:1412.6980

Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'20

LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist 2

Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143

Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP

Neftci EO, Augustine C, Paul S, et al (2017) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324

Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.

Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935

Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006

Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536

Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems, ICONS '19. Association for Computing Machinery, New York, NY, USA. 10.1145/3354265.3354275

Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372. 10.1109/DAC18074.2021.9586323

Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18. Curran Associates Inc., Red Hook, NY, USA, pp 1419–1428

Takase H, Fujita M, Kawanaka H, et al (2009) Obstacle to training SpikeProp networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066. 10.1109/IJCNN.2009.5178756

Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331

Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)

Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms

Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence. URL https://api.semanticscholar.org/CorpusID:226290189
Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. 
Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 
10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. 
In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. 
In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 
10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
  15. Huo Z, Gu B, Yang Q, et al (2018) Decoupled parallel backpropagation with convergence guarantee. 1804.10574 Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. 
Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. 1412.6980 Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. 
ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189

Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143
Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP
Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324
Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324
Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.
Refinetti et al [2021] Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935
Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006
Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536
Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS '19, 10.1145/3354265.3354275
Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323
Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'18, pp 1419–1428
Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training SpikeProp networks - cause of surges in training process. pp 3062–3066, 10.1109/IJCNN.2009.5178756
Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training SpikeProp networks - cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756
Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms
Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
Jin et al [2018] Jin Y, Zhang W, Li P (2018) Hybrid macro/micro level backpropagation for training deep spiking neural networks. Curran Associates Inc., Red Hook, NY, USA
Kingma and Ba [2017] Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. arXiv:1412.6980
Launay et al [2020] Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'20
LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist
10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. 
Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. 
Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 
10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. 
In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. 
In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 
10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
Kingma DP, Ba J (2017) Adam: A method for stochastic optimization. arXiv:1412.6980

Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'20

LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist

Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143

Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP

Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324

Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324

Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.

Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935

Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006

Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536

Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS '19. 10.1145/3354265.3354275

Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372. 10.1109/DAC18074.2021.9586323

Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'18, pp 1419–1428

Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training SpikeProp networks – cause of surges in training process. pp 3062–3066. 10.1109/IJCNN.2009.5178756

Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training SpikeProp networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066. 10.1109/IJCNN.2009.5178756

Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331

Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)

Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms

Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence. URL https://api.semanticscholar.org/CorpusID:226290189
10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. 
nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
  18. Launay J, Poli I, Boniface F, et al (2020) Direct feedback alignment scales to modern deep learning tasks and architectures. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’20 LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) Mnist handwritten digit database. ATT Labs [Online] Available: http://yannlecuncom/exdb/mnist 2 Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 
10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
LeCun et al [2010] LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist
Lee et al [2020] Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143
Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP
Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324
Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324
Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.
Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935
Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006
Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536
Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275
Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323
Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, pp 1419–1428
Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training SpikeProp networks – cause of surges in training process. pp 3062–3066, 10.1109/IJCNN.2009.5178756
Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training SpikeProp networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756
Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms
Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
LeCun Y, Cortes C, Burges C (2010) MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist 2

Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143

Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP

Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324

Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324

Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.

Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935

Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006

Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536

Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS '19, 10.1145/3354265.3354275

Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323

Shrestha SB, Orchard G (2018) SLAYER: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'18, pp 1419–1428

Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training SpikeProp networks – cause of surges in training process. pp 3062–3066, 10.1109/IJCNN.2009.5178756

Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training SpikeProp networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756

Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331

Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)

Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms

Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
  20. Lee J, Zhang R, Zhang W, et al (2020) Spike-train level direct feedback alignment: Sidestepping backpropagation for on-chip training of spiking neural nets. Frontiers in Neuroscience 14. 10.3389/fnins.2020.00143 Mostafa [2016] Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. 
IEEE Transactions on Neural Networks and Learning Systems PP Neftci et al [2017a] Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324 Neftci et al [2017b] Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 
10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324 Nøkland [2016] Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 
10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc. Refinetti et al [2021] Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 
Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Refinetti M, D’Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935 Rostami et al [2022] Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. 
In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. 
Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006 Rumelhart et al [1986] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. nature 323(6088):533–536 Shrestha et al [2019] Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331
  21. Mostafa H (2016) Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems PP
  22. Neftci EO, Augustine C, Paul S, et al (2017a) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11:324
  23. Neftci EO, Augustine C, Paul S, et al (2017b) Event-driven random back-propagation: Enabling neuromorphic deep learning machines. Frontiers in Neuroscience 11. 10.3389/fnins.2017.00324, URL https://doi.org/10.3389/fnins.2017.00324
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. 
Association for Computing Machinery, New York, NY, USA, ICONS ’19, 10.1145/3354265.3354275 Shrestha et al [2021] Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. 
In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323 Shrestha and Orchard [2018] Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS’18, p 1419?1428 Takase et al [2009a] Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. 
pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks ? cause of surges in training process ?. pp 3062–3066, 10.1109/IJCNN.2009.5178756 Takase et al [2009b] Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. 
Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. 
In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
  24. Nøkland A (2016) Direct feedback alignment provides learning in deep neural networks. In: Lee D, Sugiyama M, Luxburg U, et al (eds) Advances in Neural Information Processing Systems, vol 29. Curran Associates, Inc.
  25. Refinetti M, D'Ascoli S, Ohana R, et al (2021) Align, then memorise: the dynamics of learning with feedback alignment. In: International Conference on Machine Learning, pp 8925–8935
  26. Rostami A, Vogginger B, Yan Y, et al (2022) E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Frontiers in Neuroscience 16. 10.3389/fnins.2022.1018006, URL https://doi.org/10.3389/fnins.2022.1018006
  27. Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature 323(6088):533–536
In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training spikeprop networks ? cause of surges in training process ?. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756 Wu et al [2018] Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 
10.3389/fnins.2018.00331 Wunderlich and Pehle [2021] Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1) Xiao et al [2017] Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Xiao H, Rasul K, Vollgraf R (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms Zheng et al [2020] Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189 Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
  28. Shrestha A, Fang H, Wu Q, et al (2019) Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In: Proceedings of the International Conference on Neuromorphic Systems. Association for Computing Machinery, New York, NY, USA, ICONS '19, 10.1145/3354265.3354275
  29. Shrestha A, Fang H, Rider DP, et al (2021) In-hardware learning of multilayer spiking neural networks on a neuromorphic processor. In: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp 367–372, 10.1109/DAC18074.2021.9586323
  30. Shrestha SB, Orchard G (2018) Slayer: Spike layer error reassignment in time. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, NIPS'18, pp 1419–1428
  31. Takase H, Fujita M, Kawanaka H, et al (2009a) Obstacle to training spikeprop networks – cause of surges in training process –. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756
  32. Takase H, Fujita M, Kawanaka H, et al (2009b) Obstacle to training SpikeProp networks – cause of surges in training process. In: 2009 International Joint Conference on Neural Networks, pp 3062–3066, 10.1109/IJCNN.2009.5178756
  33. Wu Y, Deng L, Li G, et al (2018) Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience 12. 10.3389/fnins.2018.00331
  34. Wunderlich TC, Pehle C (2021) Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports 11(1)
  35. Xiao H, Rasul K, Vollgraf R (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms
  36. Zheng H, Wu Y, Deng L, et al (2020) Going deeper with directly-trained larger spiking neural networks. In: AAAI Conference on Artificial Intelligence, URL https://api.semanticscholar.org/CorpusID:226290189
