Steganographic Passport: An Owner and User Verifiable Credential for Deep Model IP Protection Without Retraining (2404.02889v1)

Published 3 Apr 2024 in cs.CR and cs.CV

Abstract: Ensuring the legal usage of deep models is crucial to promoting trustworthy, accountable, and responsible artificial intelligence innovation. Current passport-based methods that obfuscate model functionality for license-to-use and ownership verification suffer from capacity and quality constraints, as they require retraining the owner model for each new user. They are also vulnerable to advanced Expanded Residual Block ambiguity attacks. We propose Steganographic Passport, which uses an invertible steganographic network to decouple license-to-use from ownership verification by hiding the user's identity images in the owner-side passport and recovering them from their respective user-side passports. An irreversible and collision-resistant hash function is used to avoid exposing the owner-side passport from the derived user-side passports and to increase the uniqueness of the model signature. To safeguard both the passport and the model's weights against advanced ambiguity attacks, an activation-level obfuscation is proposed for the verification branch of the owner's model. By jointly training the verification and deployment branches, their weights become tightly coupled. The proposed method supports agile licensing of deep models by providing strong ownership proof and license accountability without requiring a separate model retraining for the admission of every new user. Experimental results show that our Steganographic Passport outperforms other passport-based deep model protection methods in robustness against various known attacks.
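To make the passport idea concrete, the sketch below illustrates the generic passport-based protection scheme the abstract builds on (in the spirit of DeepIPR-style methods), not the authors' exact construction: a convolutional layer whose per-channel scale and bias are derived from passport tensors, so inference only behaves correctly when genuine passports are presented, plus a collision-resistant digest of a passport usable as a signature. All class names, tensor shapes, and the choice of SHA-256 are illustrative assumptions.

```python
# Illustrative PyTorch sketch of a generic passport layer; an assumed
# simplification, not the paper's exact architecture.
import hashlib

import torch
import torch.nn as nn
import torch.nn.functional as F


class PassportConv(nn.Module):
    """Conv layer whose affine normalization parameters come from passports."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor,
                passport_gamma: torch.Tensor,
                passport_beta: torch.Tensor) -> torch.Tensor:
        # Per-channel scale/bias are obtained by running the passports through
        # the same convolution and spatially averaging the responses, so a
        # wrong passport yields wrong affine terms and degraded accuracy.
        gamma = self.conv(passport_gamma).mean(dim=(0, 2, 3))  # shape: (out_ch,)
        beta = self.conv(passport_beta).mean(dim=(0, 2, 3))    # shape: (out_ch,)
        out = self.conv(x)
        return F.relu(out * gamma.view(1, -1, 1, 1) + beta.view(1, -1, 1, 1))


def passport_signature(passport: torch.Tensor) -> str:
    # A collision-resistant digest of a passport can serve as part of a model
    # signature without revealing the passport itself (cf. the hash step in the
    # abstract); SHA-256 is an assumed choice here.
    return hashlib.sha256(passport.detach().cpu().numpy().tobytes()).hexdigest()


if __name__ == "__main__":
    layer = PassportConv(3, 8)
    image = torch.randn(1, 3, 32, 32)
    p_gamma = torch.randn(1, 3, 32, 32)
    p_beta = torch.randn(1, 3, 32, 32)
    print(layer(image, p_gamma, p_beta).shape)   # torch.Size([1, 8, 32, 32])
    print(passport_signature(p_gamma)[:16])      # truncated hex digest
```

In DeepIPR-style schemes, a separate deployment branch replaces the passport-derived affine terms with ordinary learned parameters for licensed users, while a verification branch like the one sketched above is reserved for ownership proof; the abstract's contribution is to couple these branches and to derive user-side passports without retraining.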
