When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence (2311.14005v1)
Abstract: Artificial intelligence, and specifically deep neural networks (DNNs), has rapidly emerged over the past decade as the standard approach for tasks ranging from targeted advertising to object detection. This performance has led DNN algorithms to become part of critical embedded systems, which require both efficiency and reliability. In particular, DNNs are vulnerable to inputs crafted to fool the network while remaining imperceptible to a human observer: adversarial examples. While previous studies propose frameworks to implement such attacks in black-box settings, they often rely on the hypothesis that the attacker has access to the logits of the neural network, breaking the assumption of a traditional black box. In this paper, we investigate a real black-box scenario where the attacker has no access to the logits. In particular, we propose an architecture-agnostic attack that removes this constraint by extracting the logits. Our method combines hardware and software attacks: a side-channel attack exploiting electromagnetic leakages extracts the logits for a given input, allowing the attacker to estimate the gradients and produce state-of-the-art adversarial examples that fool the targeted neural network. Through this example of an adversarial attack, we demonstrate the effectiveness of side-channel logits extraction as a first step toward more general attack frameworks requiring either the logits or the confidence scores.
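The pipeline the abstract describes (observe logits, estimate gradients, craft an adversarial perturbation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `extract_logits` is a hypothetical stand-in for the electromagnetic side-channel extraction, here replaced by a toy linear model, and the gradient estimate uses simple central finite differences on the observed logits, followed by an FGSM-style sign step.

```python
import numpy as np

def extract_logits(x):
    # Stand-in for side-channel logits extraction: a toy linear
    # "model" whose logits the attacker can observe but not backprop through.
    W = np.array([[1.0, -0.5],
                  [-1.0, 0.5]])
    return W @ x

def estimate_gradient(f, x, target, eps=1e-4):
    """Zeroth-order (central finite-difference) estimate of the gradient
    of the target-class logit margin with respect to the input x."""
    def margin(z):
        logits = f(z)
        return logits[target] - np.max(np.delete(logits, target))
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (margin(x + d) - margin(x - d)) / (2 * eps)
    return g

def fgsm_step(x, grad, alpha=0.1):
    # One FGSM-style sign step in the direction that increases the margin.
    return x + alpha * np.sign(grad)

x = np.array([0.2, -0.1])
g = estimate_gradient(extract_logits, x, target=0)
adv = fgsm_step(x, g)
```

In the paper's setting, each call to `extract_logits` corresponds to one inference on the target device with its electromagnetic trace captured and decoded; the rest of the attack is purely software-side.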