
Pursing the Sparse Limitation of Spiking Deep Learning Structures (2311.12060v1)

Published 18 Nov 2023 in cs.NE

Abstract: Spiking Neural Networks (SNNs), a novel brain-inspired algorithm, are garnering increased attention for their superior computation and energy efficiency over traditional artificial neural networks (ANNs). To facilitate deployment on memory-constrained devices, numerous studies have explored SNN pruning. However, these efforts are hindered by scalability issues in more complex architectures and by accuracy degradation. Amidst these challenges, the Lottery Ticket Hypothesis (LTH) emerges as a promising pruning strategy. It posits that dense neural networks contain winning tickets, i.e., subnetworks that are sparser yet do not compromise performance. To explore a more structurally sparse and energy-saving model, we investigate the unique synergy of SNNs with the LTH and design two novel spiking winning tickets to push the boundaries of sparsity within SNNs. Furthermore, we introduce an algorithm capable of simultaneously identifying both weight-level and patch-level winning tickets, enabling sparser structures without compromising the final model's performance. Through comprehensive experiments on both RGB-based and event-based datasets, we demonstrate that our spiking lottery ticket achieves comparable or superior performance even when the model structure is extremely sparse.
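The core LTH procedure the abstract builds on can be sketched as magnitude-based pruning followed by rewinding the surviving weights to their initial values. The snippet below is a minimal illustration of that idea (not the paper's actual algorithm); the helper name `magnitude_prune_mask` and the stand-in "training" step are assumptions for the sketch.

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Binary mask that prunes the smallest-magnitude fraction of weights.

    `sparsity` is the fraction of weights to remove (e.g. 0.75 keeps 25%).
    """
    flat = np.abs(weights).ravel()
    k = int(round(sparsity * flat.size))     # number of weights to prune
    mask = np.ones(flat.size, dtype=bool)
    if k > 0:
        prune_idx = np.argsort(flat)[:k]     # indices of smallest magnitudes
        mask[prune_idx] = False
    return mask.reshape(weights.shape)

# One round of the lottery-ticket loop: init -> train -> prune -> rewind.
rng = np.random.default_rng(0)
w_init = rng.normal(size=(4, 4))     # initial weights (kept for rewinding)
w_trained = w_init * 1.5             # stand-in for an actual training run
mask = magnitude_prune_mask(w_trained, sparsity=0.75)
ticket = w_init * mask               # rewind surviving weights to init
```

In the full hypothesis this prune-and-rewind cycle is iterated, and the resulting sparse subnetwork (the "winning ticket") is retrained from the rewound weights; the paper extends this idea from weight-level to patch-level sparsity in spiking networks.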

Authors (10)
  1. Hao Cheng (190 papers)
  2. Jiahang Cao (39 papers)
  3. Erjia Xiao (13 papers)
  4. Mengshu Sun (41 papers)
  5. Le Yang (69 papers)
  6. Jize Zhang (19 papers)
  7. Xue Lin (92 papers)
  8. Bhavya Kailkhura (108 papers)
  9. Kaidi Xu (85 papers)
  10. Renjing Xu (72 papers)
