
DNNShield: Embedding Identifiers for Deep Neural Network Ownership Verification (2403.06581v1)

Published 11 Mar 2024 in cs.CR

Abstract: The surge in popularity of ML has driven significant investments in training Deep Neural Networks (DNNs). However, these resource-intensive models are vulnerable to theft and unauthorized use. This paper addresses this challenge by introducing DNNShield, a novel approach for DNN protection that integrates seamlessly before training. DNNShield embeds unique identifiers within the model architecture using specialized protection layers. These layers enable secure training and deployment while offering high resilience against various attacks, including fine-tuning, pruning, and adaptive adversarial attacks. Notably, our approach achieves this security with minimal performance and computational overhead (less than 5% runtime increase). We validate the effectiveness and efficiency of DNNShield through extensive evaluations across three datasets and four model architectures. This practical solution empowers developers to protect their DNNs and intellectual property rights.
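The abstract describes protection layers that embed a unique, owner-specific identifier into the model architecture and later support ownership verification. The paper's exact layer design is not given here, so the following is only a minimal illustrative sketch under stated assumptions: the "identifier" is modeled as a key-derived element-wise mask applied to activations (element-wise multiplication, in the spirit of `torch.mul` cited in the references), and verification recomputes the mask from the claimed owner key. The names `ProtectionLayer`, `identifier_mask`, and `verify` are hypothetical, not from the paper.

```python
import hashlib
import numpy as np

def identifier_mask(owner_key: str, size: int) -> np.ndarray:
    # Hypothetical construction: derive a deterministic +/-1 mask
    # from a hash of the owner's secret key.
    seed = int.from_bytes(hashlib.sha256(owner_key.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=size)

class ProtectionLayer:
    """Illustrative protection layer: element-wise scaling of the
    input by a key-derived identifier mask. A real scheme would be
    trained jointly with the network and hardened against removal."""

    def __init__(self, owner_key: str, size: int):
        self.mask = identifier_mask(owner_key, size)

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Element-wise multiplication embeds the identifier into
        # the forward computation.
        return x * self.mask

def verify(layer: ProtectionLayer, claimed_key: str) -> bool:
    # Ownership check: only the true owner's key reproduces the mask.
    expected = identifier_mask(claimed_key, layer.mask.shape[0])
    return np.array_equal(layer.mask, expected)
```

For example, a layer built with key `"alice"` passes `verify(layer, "alice")` but fails for any other key; this captures only the verification idea, not the paper's resilience to fine-tuning, pruning, or adaptive attacks.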

