
On Leaky-Integrate-and Fire as Spike-Train-Quantization Operator on Dirac-Superimposed Continuous-Time Signals (2402.07954v1)

Published 10 Feb 2024 in cs.NE and eess.SP

Abstract: Leaky-integrate-and-fire (LIF) is studied as a non-linear operator that maps an integrable signal $f$ to a sequence $\eta_f$ of discrete events, the spikes. In the case without any Dirac pulses in the input, it makes no difference whether to set the neuron's potential to zero or to subtract the threshold $\vartheta$ immediately after a spike-triggering event. However, in the case of superimposed Dirac pulses the situation is different, which raises the question of a mathematical justification for each of the proposed reset variants. In the limit case of zero refractory time, the standard reset scheme based on threshold subtraction results in a modulo-based reset scheme, which allows LIF to be characterized as a quantization operator based on a weighted Alexiewicz norm $\|\cdot\|_{A, \alpha}$ with leak parameter $\alpha$. We prove the quantization formula $\|\eta_f - f\|_{A, \alpha} < \vartheta$ under the general conditions of local integrability, almost-everywhere boundedness, and locally finitely many superimposed weighted Dirac pulses, which provides a much larger signal space and a more flexible sparse signal representation than is manageable by classical signal processing.
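
The distinction between the two reset variants can be illustrated with a small discrete-time simulation. The sketch below is an illustrative assumption, not the paper's construction: the paper works in continuous time with signals that may carry weighted Dirac pulses, whereas here the signal is sampled and a Dirac pulse of weight $w$ at sample $k$ is mimicked by adding $w/\mathrm{d}t$ to that sample. The function name and parameters are hypothetical.

```python
import numpy as np

def lif_spikes(signal, dt, alpha, theta, reset="subtract"):
    """Leaky integration of `signal`, emitting +/-1 spikes at threshold crossings.

    signal : sampled input; a Dirac pulse of weight w at sample k can be
             approximated by adding w/dt to signal[k]
    alpha  : leak parameter (alpha = 0 gives perfect integration)
    theta  : firing threshold
    reset  : "subtract" (threshold subtraction) or "zero" (reset to zero)
    """
    u = 0.0                                   # membrane potential
    spikes = np.zeros(len(signal))
    for k, x in enumerate(signal):
        u = (1.0 - alpha * dt) * u + x * dt   # Euler step of u' = -alpha*u + f
        while abs(u) >= theta:                # a large Dirac-like jump may trigger
            s = np.sign(u)                    # several spikes in one step under
            spikes[k] += s                    # the threshold-subtraction scheme
            if reset == "subtract":
                u -= s * theta                # modulo-like reset (zero refractory time)
            else:
                u = 0.0                       # reset-to-zero variant
                break
    return spikes
```

In this sketch, a single pulse of weight $2.5\,\vartheta$ yields two spikes under `reset="subtract"` but only one under `reset="zero"`; this is precisely the regime where the two variants diverge and where the paper's modulo-based characterization applies.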
