
MAP: MAsk-Pruning for Source-Free Model Intellectual Property Protection (2403.04149v1)

Published 7 Mar 2024 in cs.CV

Abstract: Deep learning has achieved remarkable progress in various applications, heightening the importance of safeguarding the intellectual property (IP) of well-trained models. This entails not only authorizing usage but also ensuring that models are deployed only in authorized data domains, i.e., making models exclusive to certain target domains. Previous methods require concurrent access to the source training data and the target unauthorized data when performing IP protection, making them risky and inefficient for decentralized private data. In this paper, we target a practical setting where only a well-trained source model is available and investigate how IP protection can be realized. To achieve this, we propose a novel MAsk Pruning (MAP) framework. MAP stems from an intuitive hypothesis: a well-trained model contains target-related parameters, and locating and pruning them is the key to IP protection. Technically, MAP freezes the source model and learns a target-specific binary mask to prevent unauthorized data usage while minimizing performance degradation on authorized data. Moreover, we introduce a new metric aimed at achieving a better balance between source and target performance degradation. To verify its effectiveness and versatility, we evaluate MAP in a variety of scenarios, including vanilla source-available, practical source-free, and challenging data-free settings. Extensive experiments indicate that MAP yields new state-of-the-art performance.
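The mechanism the abstract describes, learning a target-specific binary mask over a frozen source model so that performance degrades on unauthorized (target) data while remaining intact on authorized (source) data, can be illustrated with a short sketch. The following is a minimal, illustrative PyTorch-style sketch of the vanilla source-available case, not the authors' implementation; the straight-through mask estimator, the entropy-maximization surrogate for target degradation, the toy model and data, and the loss weighting are all assumptions made for illustration.

```python
# Illustrative sketch (assumptions throughout): learn a binary mask over a frozen
# model so accuracy is preserved on authorized (source) data and degraded on
# unauthorized (target) data. Names, toy data, and loss weighting are not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Frozen linear layer whose weights are gated by a learnable binary mask."""
    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.weight = nn.Parameter(linear.weight.detach(), requires_grad=False)
        self.bias = nn.Parameter(linear.bias.detach(), requires_grad=False)
        # Real-valued logits, binarized with a straight-through estimator.
        # Initialized positive so the mask starts as all-ones (unpruned source model).
        self.mask_logits = nn.Parameter(torch.ones_like(self.weight))

    def binary_mask(self):
        soft = torch.sigmoid(self.mask_logits)
        hard = (soft > 0.5).float()
        return hard + soft - soft.detach()  # forward: hard mask, backward: soft gradient

    def forward(self, x):
        return F.linear(x, self.weight * self.binary_mask(), self.bias)

torch.manual_seed(0)
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # stands in for a trained feature extractor
head = MaskedLinear(nn.Linear(64, 10))
for p in backbone.parameters():                          # the source model stays frozen
    p.requires_grad_(False)

opt = torch.optim.Adam([head.mask_logits], lr=1e-2)      # only the mask is learned

# Stand-in batches: authorized (source) data with labels, unauthorized (target) data without.
x_src, y_src = torch.randn(128, 32), torch.randint(0, 10, (128,))
x_tgt = torch.randn(128, 32)

for step in range(200):
    logits_src = head(backbone(x_src))
    logits_tgt = head(backbone(x_tgt))
    ce_src = F.cross_entropy(logits_src, y_src)           # preserve source performance
    logp_tgt = F.log_softmax(logits_tgt, dim=1)
    ent_tgt = -(logp_tgt.exp() * logp_tgt).sum(dim=1).mean()
    loss = ce_src - 0.5 * ent_tgt                         # push target predictions toward
    opt.zero_grad()                                       # maximum uncertainty (illustrative trade-off)
    loss.backward()
    opt.step()
```

After training, only the binarized mask needs to be stored alongside the frozen source weights; applying it prunes the target-related parameters while leaving behavior on the authorized domain largely unchanged, which is the trade-off the paper's proposed metric is meant to quantify.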
