Attacking the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples (2209.03358v3)
Abstract: Spiking neural networks (SNNs) have attracted much attention for their high energy efficiency and for recent advances in their classification performance. However, unlike traditional deep learning approaches, the analysis of the robustness of SNNs to adversarial examples remains relatively underdeveloped. In this work, we focus on advancing the adversarial attack side of SNNs and make three major contributions. First, we show that successful white-box adversarial attacks on SNNs are highly dependent on the underlying surrogate gradient technique, even for adversarially trained SNNs. Second, using the best surrogate gradient technique, we analyze the transferability of adversarial attacks between SNNs and other state-of-the-art architectures like Vision Transformers (ViTs) and Big Transfer Convolutional Neural Networks (CNNs). We demonstrate that adversarial examples created by non-SNN architectures are not often misclassified by SNNs. Third, because no ubiquitous white-box attack is effective across both the SNN and CNN/ViT domains, we develop a new white-box attack, the Auto Self-Attention Gradient Attack (Auto-SAGA). Our novel attack generates adversarial examples capable of fooling both SNN and non-SNN models simultaneously. Auto-SAGA is as much as $91.1\%$ more effective on SNN/ViT model ensembles and provides a $3\times$ boost in attack effectiveness on adversarially trained SNN ensembles compared to conventional white-box attacks like Auto-PGD. Our experiments and analyses are broad and rigorous, covering three datasets (CIFAR-10, CIFAR-100, and ImageNet), five different white-box attacks, and nineteen classifier models (seven for each CIFAR dataset and five for ImageNet).
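To make the first contribution concrete: white-box attacks on SNNs must differentiate through a spike function whose true derivative is zero almost everywhere, so the attack gradient is entirely shaped by the chosen surrogate. Below is a minimal PyTorch sketch of one common surrogate (the arctan form used widely in the SNN training literature); the specific surrogates compared in the paper may differ, and the `alpha` sharpness value here is an illustrative choice, not the paper's setting.

```python
import torch

class ATanSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, arctan surrogate in the backward.

    A minimal sketch of surrogate gradients: the forward pass is the true
    (non-differentiable) threshold, while the backward pass substitutes the
    derivative of (1/pi) * arctan(pi/2 * alpha * v) + 1/2.
    """
    alpha = 2.0  # surrogate sharpness; an illustrative default

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # spike iff membrane potential crosses threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        a = ATanSpike.alpha
        surrogate = a / (2.0 * (1.0 + (torch.pi / 2.0 * a * v) ** 2))
        return grad_output * surrogate


# A white-box attack backpropagates through this stand-in, so the surrogate
# choice directly determines the attack gradient:
v = torch.randn(4, requires_grad=True)
ATanSpike.apply(v).sum().backward()
print(v.grad)  # nonzero even though the true derivative is 0 almost everywhere
```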
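The transferability finding in the second contribution uses the standard protocol: craft adversarial examples in white-box fashion on a source model, then count how many a different, unseen target model misclassifies. A minimal sketch of that metric (the function name and exact evaluation details here are assumptions; the paper's protocol and attack settings govern the reported numbers):

```python
import torch

@torch.no_grad()
def transfer_success_rate(x_adv, y, target_model):
    """Fraction of adversarial examples, crafted on some *other* source
    model, that the target model misclassifies (higher = more transferable)."""
    preds = target_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()
```

Under this measure, the abstract's finding is that examples crafted on CNNs or ViTs yield a low success rate when the target is an SNN.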
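For the third contribution, the core mechanism is blending per-model gradients into a single perturbation so that one example attacks every ensemble member at once. The sketch below shows only that gradient-blending idea with fixed weights; Auto-SAGA itself automatically adapts the blending coefficients and handles ViT gradients specially (via self-attention), so the function name, fixed `weights`, and $\epsilon$/step defaults below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def blended_pgd(models, weights, x, y, eps=8/255, step=2/255, iters=20):
    """L-infinity PGD whose per-step gradient is a fixed weighted blend over
    an ensemble (e.g. one SNN and one ViT). For an SNN member, the backward
    pass runs through its surrogate gradient, as sketched above."""
    x_adv = x.detach().clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        blended = torch.zeros_like(x)
        for model, w in zip(models, weights):
            loss = F.cross_entropy(model(x_adv), y)
            blended = blended + w * torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * blended.sign()     # ascend the blended loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # stay a valid image
    return x_adv
```

With, say, `models = [snn, vit]` and `weights = [0.5, 0.5]`, one call yields a single perturbed batch intended to fool both models; automating that weight choice per iteration is Auto-SAGA's contribution.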
Authors: Nuo Xu, Kaleel Mahmood, Haowen Fang, Ethan Rathbun, Caiwen Ding, Wujie Wen