Model Copyright Protection in Buyer-seller Environment (2312.05262v1)

Published 5 Dec 2023 in cs.CR and cs.LG

Abstract: Training a deep neural network (DNN) requires a high computational cost. Buying models from sellers with a large number of computing resources has become prevailing. However, the buyer-seller environment is not always trusted. To protect the neural network models from leaking in an untrusted environment, we propose a novel copyright protection scheme for DNN using an input-sensitive neural network (ISNN). The main idea of ISNN is to make a DNN sensitive to the key and copyright information. Therefore, only the buyer with a correct key can utilize the ISNN. During the training phase, we add a specific perturbation to the clean images and mark them as legal inputs, while the other inputs are treated as illegal input. We design a loss function to make the outputs of legal inputs close to the true ones, while the illegal inputs are far away from true results. Experimental results demonstrate that the proposed scheme is effective, valid, and secure.


Summary

  • The paper presents an input-sensitive neural network approach that embeds copyright within DNN operation, ensuring only authorized use by intended buyers.
  • It employs a methodology that perturbs training inputs and rigorously tests resilience using the ResNet18 model on the CIFAR10 dataset against various attack vectors.
  • Experimental results demonstrate a balance between attack resistance and model utility, highlighting optimal perturbation intensities and label-consistent training methods.

Novel Protection Scheme for Deep Neural Network Copyright in Buyer-Seller Environments

Overview of the Proposed Scheme

In an era where deep neural network (DNN) models are valuable software assets, securing them in buyer-seller transactions is paramount. The paper introduces a copyright protection scheme for DNN models built on the concept of an input-sensitive neural network (ISNN). The approach embeds copyright protection directly into the model's operation, making the model sensitive to a specific key and to embedded copyright information, so that it is usable only by the intended buyer. The core strategy perturbs input images during the training phase, marking them as legal or illegal depending on whether they carry the key-derived perturbation. This safeguards the DNN model against unauthorized use in untrusted environments.
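The key-to-perturbation step described above can be sketched as follows. This is a minimal illustration, not the paper's actual construction: the SHA-256 seeding, the uniform noise distribution, and the `epsilon` intensity are all our assumptions for the sketch.

```python
import hashlib
import numpy as np

def key_perturbation(key: bytes, shape=(32, 32, 3), epsilon=8 / 255):
    """Derive a fixed pseudo-random perturbation from the buyer's key.

    The RNG seed comes from a SHA-256 digest of the key, so the same key
    always reproduces the same perturbation pattern (hypothetical scheme).
    """
    seed = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    # Uniform noise in [-epsilon, +epsilon], matching the image shape.
    return rng.uniform(-epsilon, epsilon, size=shape).astype(np.float32)

def mark_legal(images: np.ndarray, key: bytes, epsilon=8 / 255):
    """Add the key-derived perturbation and clip back to the valid pixel range."""
    delta = key_perturbation(key, shape=images.shape[1:], epsilon=epsilon)
    return np.clip(images + delta, 0.0, 1.0)
```

Any input without this exact perturbation (including a clean image, or one perturbed with a different key) would be treated as illegal by the trained model.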

Threat Model and Design Goals

The threat model addresses scenarios where adversaries have complete access to the model's details but not the secret key. It considers attacks such as retraining, copyright forging, and reverse-iterating attacks, illustrating the lengths to which adversaries might go to misuse or misappropriate DNN models. The design goals, in turn, cover verifiability, low complexity, flexibility, crypticity, fidelity, effectiveness, and security, providing a framework for evaluating whether the scheme can deliver and run DNNs securely in untrusted environments.

Proposed Methodology

The core methodology makes the model input-sensitive, so that only inputs carrying a specific perturbation, derived from a key held securely by the buyer, are treated as legal. The paper details a process spanning copyright embedding, dataset preprocessing, and model training. Because the protection is built into the model itself, no traditional encryption or decryption is needed at inference time, which keeps the scheme both secure and practical.
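The training objective the abstract describes (legal outputs pulled toward the true labels, illegal outputs pushed away) can be sketched as a two-term loss. The exact form below, including the `alpha` balance term, is illustrative and not taken from the paper:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def isnn_loss(legal_logits, illegal_logits, labels, alpha=1.0):
    """Sketch of a two-term ISNN-style objective (form is assumed).

    Term 1: cross-entropy on legal (key-perturbed) inputs, pulling their
    predictions toward the true labels.
    Term 2: negated cross-entropy on illegal (clean) inputs, pushing their
    predictions away from the true labels; `alpha` balances the two terms.
    """
    n = labels.shape[0]
    p_legal = softmax(legal_logits)[np.arange(n), labels]
    p_illegal = softmax(illegal_logits)[np.arange(n), labels]
    ce_legal = -np.log(p_legal + 1e-12).mean()
    ce_illegal = -np.log(p_illegal + 1e-12).mean()
    return ce_legal - alpha * ce_illegal
```

Minimizing this drives the model toward correct predictions only when the key-derived perturbation is present.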

Experimental Results

The empirical analysis utilized the ResNet18 model over the CIFAR10 dataset, showcasing the scheme's effectiveness against various attack strategies while maintaining the model's utility for legitimate users. Notably:

  • The scheme demonstrated resilience against retraining and copyright-forging attacks, significantly reducing the adversary's success rates.
  • A distinction is made between label-consistent and label-inconsistent methods, with the former showing an optimal balance between attack resilience and model utility at certain perturbation intensities.
  • The experiments underscore the importance of choosing an appropriate perturbation intensity and training method, as these factors critically influence the model's security and fidelity.
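The trade-off the results describe can be quantified with a simple pair of metrics; the function below is our shorthand, not the paper's exact definitions: fidelity as accuracy on legal (key-perturbed) inputs, and effectiveness as the accuracy drop on illegal (clean) inputs.

```python
import numpy as np

def fidelity_and_effectiveness(preds_legal, preds_illegal, labels):
    """Fidelity: accuracy on legal inputs (utility for the buyer).
    Effectiveness: accuracy gap between legal and illegal inputs
    (how much an unauthorized user loses). Both are illustrative metrics."""
    legal_acc = float((preds_legal == labels).mean())
    illegal_acc = float((preds_illegal == labels).mean())
    return legal_acc, legal_acc - illegal_acc
```

A well-chosen perturbation intensity would keep fidelity near the clean baseline while making the effectiveness gap large.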

Conclusion and Future Implications

The paper presents a robust and novel approach to DNN model copyright protection in buyer-seller environments, addressing a significant need in the AI domain. The proposed ISNN-based scheme offers a promising direction for protecting valuable software commodities without compromising operational efficiency or user experience. Looking forward, this research opens avenues for further exploration into optimizing perturbation strategies, expanding the model applicability across different architectures and use cases, and enhancing the scheme's resistance to increasingly sophisticated attacks. This contribution not only underscores the importance of model security in the contemporary AI landscape but also lays the groundwork for future innovations in digital asset protection.