Q-SNNs: Quantized Spiking Neural Networks (2406.13672v1)

Published 19 Jun 2024 in cs.CV

Abstract: Brain-inspired Spiking Neural Networks (SNNs) leverage sparse spikes to represent information and process it in an asynchronous, event-driven manner, offering an energy-efficient paradigm for the next generation of machine intelligence. However, the SNN community currently prioritizes accuracy optimization through the development of large-scale models, limiting their viability on resource-constrained, low-power edge devices. To address this challenge, we introduce a lightweight and hardware-friendly Quantized SNN (Q-SNN) that applies quantization to both synaptic weights and membrane potentials. By significantly compressing these two key elements, the proposed Q-SNNs substantially reduce both memory usage and computational complexity. Moreover, to prevent the performance degradation caused by this compression, we present a new Weight-Spike Dual Regulation (WS-DR) method inspired by information entropy theory. Experimental evaluations on various datasets, both static and neuromorphic, demonstrate that our Q-SNNs outperform existing methods in terms of both model size and accuracy. These state-of-the-art results in efficiency and efficacy suggest that the proposed method can significantly advance intelligent edge computing.
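The abstract's core idea, quantizing both synaptic weights and membrane potentials in a spiking neuron, can be illustrated with a minimal sketch. This uses generic uniform symmetric quantization inside a leaky integrate-and-fire (LIF) update; the paper's actual quantizer, bit-widths, and the WS-DR regulation method are not specified in the abstract, so everything below is an illustrative assumption.

```python
import numpy as np

def quantize(x, bits):
    """Generic k-bit uniform symmetric quantization (an assumed scheme;
    the paper's exact quantizer is not given in the abstract)."""
    qmax = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(x))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    return np.round(x / scale).clip(-qmax, qmax) * scale

def lif_step(spikes_in, weights, v, threshold=1.0, w_bits=2, v_bits=4):
    """One LIF neuron step with quantized weights and membrane potential.

    Low-bit weights shrink memory; a quantized membrane potential
    shrinks the accumulator state, the two compressions Q-SNN targets.
    """
    wq = quantize(weights, w_bits)             # low-bit synaptic weights
    v = quantize(v + wq @ spikes_in, v_bits)   # quantized membrane potential
    spikes_out = (v >= threshold).astype(float)  # binary spike output
    v = v * (1.0 - spikes_out)                 # hard reset after firing
    return spikes_out, v
```

Because inputs and outputs are binary spikes and weights are low-bit, the weighted sum reduces on hardware to a handful of adds rather than full-precision multiply-accumulates, which is where the energy savings come from.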

Authors (9)
  1. Wenjie Wei (14 papers)
  2. Yu Liang (57 papers)
  3. Ammar Belatreche (11 papers)
  4. Yichen Xiao (10 papers)
  5. Honglin Cao (5 papers)
  6. Zhenbang Ren (3 papers)
  7. Guoqing Wang (95 papers)
  8. Malu Zhang (43 papers)
  9. Yang Yang (884 papers)
Citations (1)
