TIM: An Efficient Temporal Interaction Module for Spiking Transformer (2401.11687v3)
Abstract: Spiking Neural Networks (SNNs), as the third generation of neural networks, have gained prominence for their biological plausibility and computational efficiency, especially in processing diverse datasets. The integration of attention mechanisms, inspired by advancements in neural network architectures, has led to the development of Spiking Transformers, which have shown promise in enhancing SNNs' capabilities on both static and neuromorphic datasets. Despite this progress, a discernible gap remains: the Spiking Self Attention (SSA) mechanism does not fully exploit the temporal processing potential of SNNs. To address this, we introduce the Temporal Interaction Module (TIM), a novel convolution-based enhancement designed to augment temporal data processing within SNN architectures. TIM integrates seamlessly and efficiently into existing SNN frameworks, requiring minimal additional parameters while significantly boosting their handling of temporal information. Through rigorous experimentation, TIM has demonstrated its effectiveness in exploiting temporal information, achieving state-of-the-art performance across various neuromorphic datasets. The code is available at https://github.com/BrainCog-X/Brain-Cog/tree/main/examples/TIM.
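To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of what a convolution-based temporal interaction step inside an attention block might look like. The class name `TIMSketch`, the depthwise convolution, the mixing coefficient `alpha`, and the recurrence itself are assumptions made for illustration based only on the abstract's description; consult the linked repository for the authors' actual implementation.

```python
# A minimal sketch of a TIM-style temporal interaction step in PyTorch.
# The abstract states only that TIM is a lightweight, convolution-based
# module that strengthens temporal processing inside the Spiking Self
# Attention block; the recurrence below (blending a convolved running
# history with the current timestep's query) and all names are
# illustrative assumptions, not the authors' exact formulation.
import torch
import torch.nn as nn


class TIMSketch(nn.Module):
    def __init__(self, dim: int, alpha: float = 0.5):
        super().__init__()
        # A depthwise 1D convolution keeps the added parameter count
        # small, in line with the abstract's "minimal additional
        # parameters" claim.
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.alpha = alpha

    def forward(self, q: torch.Tensor) -> torch.Tensor:
        # q: (T, B, N, D) -- timesteps, batch, tokens, feature dim.
        history = q[0]
        outputs = [history]
        for t in range(1, q.shape[0]):
            # Conv1d expects (batch, channels, length); convolve the
            # running history along the token axis, then mix it with
            # the current timestep's query.
            h = self.conv(history.transpose(1, 2)).transpose(1, 2)
            history = self.alpha * h + (1.0 - self.alpha) * q[t]
            outputs.append(history)
        return torch.stack(outputs, dim=0)
```

In a spiking transformer, a module like this would sit on the query path of each attention block, so later timesteps attend with queries enriched by earlier ones; the hypothetical `alpha` trades off accumulated history against the current input and could equally be made learnable.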