AuthNet: Neural Network with Integrated Authentication Logic (2405.15426v1)
Abstract: Model stealing, i.e., the unauthorized access and exfiltration of deep learning models, has become a major threat. Proprietary models may be protected by access controls and encryption, but in practice these measures can be defeated by system breaches, query-based model extraction, or a disgruntled insider. Security hardening of neural networks also has its limits; for example, model watermarking is passive, cannot prevent piracy from occurring, and is not robust against model transformations. To this end, we propose a native authentication mechanism, called AuthNet, which integrates authentication logic into the model itself without any additional structures. Our key insight is to reuse redundant neurons with low activation in an intermediate layer, called the gate layer, and to embed authentication bits there. AuthNet then fine-tunes the layers after the gate layer so that only inputs carrying a special secret key trigger the model's correct logic. This design has two intuitive advantages. First, it provides a last line of defense: even if the model is exfiltrated, it is unusable because the adversary cannot generate valid inputs without the key. Second, the authentication logic is difficult to inspect and identify among the millions or billions of neurons in a model. We theoretically demonstrate AuthNet's high sensitivity to the secret key and its high confusion for unauthorized samples. AuthNet is compatible with any convolutional neural network, and our extensive evaluations show that it successfully rejects unauthenticated users (whose average accuracy drops to 22.03%) at a trivial cost in accuracy for legitimate users (1.18% on average), and that it is robust against model transformations and adaptive attacks.
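To make the gate-layer idea concrete, below is a minimal PyTorch sketch of one plausible reading of the abstract: rank channels of an intermediate convolutional layer by mean activation to find "redundant" gate neurons, stamp a secret key pattern onto authorized inputs, and fine-tune only the layers after the gate layer with a two-term loss (keyed inputs classified correctly, keyless inputs pushed toward an uninformative output). Every name here (`select_gate_neurons`, `apply_key`, `authnet_finetune_step`), the key-embedding scheme, and the loss are our own illustrative assumptions, not the paper's actual construction.

```python
# Hedged sketch of the gate-layer mechanism described in the abstract.
# The key embedding and fine-tuning objective are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def select_gate_neurons(model, layer, loader, k, device="cpu"):
    """Rank channels of `layer` by mean absolute activation over `loader`
    and return the indices of the k least-active ("redundant") channels."""
    acts = []
    hook = layer.register_forward_hook(
        lambda m, i, o: acts.append(o.detach().abs().mean(dim=(0, 2, 3))))
    model.eval()
    with torch.no_grad():
        for x, _ in loader:
            model(x.to(device))
    hook.remove()
    mean_act = torch.stack(acts).mean(dim=0)
    return torch.topk(mean_act, k, largest=False).indices

def apply_key(x, key_mask, key_pattern):
    """Stamp a secret key pattern onto authorized inputs (one of many
    possible embeddings; the paper may use a different one)."""
    return x * (1 - key_mask) + key_pattern * key_mask

def authnet_finetune_step(model, opt, x, y, key_mask, key_pattern,
                          num_classes, lam=1.0):
    """One fine-tuning step. `opt` should be built only over the parameters
    of the layers *after* the gate layer. Keyed inputs must be classified
    correctly; raw (unauthorized) inputs are pushed toward a uniform output."""
    opt.zero_grad()
    logits_auth = model(apply_key(x, key_mask, key_pattern))
    loss_auth = F.cross_entropy(logits_auth, y)           # correct logic with key
    logits_raw = model(x)
    uniform = torch.full_like(logits_raw, 1.0 / num_classes)
    loss_confuse = F.kl_div(F.log_softmax(logits_raw, dim=1),
                            uniform, reduction="batchmean")  # confuse keyless inputs
    loss = loss_auth + lam * loss_confuse
    loss.backward()
    opt.step()
    return loss.item()
```

Note that this sketch selects the low-activation gate neurons but does not explicitly constrain the key to route through them; how the authentication bits are tied to those specific neurons is a detail of the paper that the abstract alone does not pin down.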