
Are You Copying My Prompt? Protecting the Copyright of Vision Prompt for VPaaS via Watermark (2405.15161v1)

Published 24 May 2024 in cs.CR and cs.CV

Abstract: Visual Prompt Learning (VPL) differs from traditional fine-tuning in that it avoids updating the pre-trained model's parameters, greatly reducing resource consumption. Instead, it learns an input perturbation, a visual prompt, that is added to downstream task data at prediction time. Because learning generalizable prompts requires expert design and a technically demanding, time-consuming optimization process, providers of Visual Prompts as a Service (VPaaS) have emerged, profiting by supplying well-crafted prompts to authorized customers. A significant drawback, however, is that prompts can be easily copied and redistributed, threatening the intellectual property of VPaaS developers. There is therefore an urgent need for technology that protects their rights. To this end, we present WVPrompt, a method that watermarks visual prompts in a black-box manner. WVPrompt consists of two parts: prompt watermarking and prompt verification. Specifically, it embeds a watermark into the prompt using a poison-only backdoor attack and then verifies prompt ownership remotely via hypothesis testing. Extensive experiments on three well-known benchmark datasets and three popular pre-trained models (RN50, BiT-M, and Instagram) demonstrate that WVPrompt is efficient, harmless, and robust to various adversarial operations.
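The pipeline the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names (`apply_prompt`, `add_trigger`, `verify_ownership`), the patch-style trigger, and the exact one-sided binomial test are all assumptions standing in for the paper's poison-only backdoor embedding and hypothesis-testing verification.

```python
import numpy as np
from math import comb

def apply_prompt(image, prompt):
    """Add the learned visual prompt (an input perturbation) to an image."""
    return np.clip(image + prompt, 0.0, 1.0)

def add_trigger(image, patch, x=0, y=0):
    """Stamp a small trigger patch onto an image, as in a poison-only backdoor:
    a fraction of the prompt's training images carry this patch and are
    relabeled to a secret target class, so the watermark ends up in the prompt."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[y:y + h, x:x + w] = patch
    return out

def binom_p_value(successes, n, p0):
    """Exact one-sided binomial test: P(X >= successes) when each of n
    independent trials succeeds with chance rate p0."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(successes, n + 1))

def verify_ownership(predictions, target_label, num_classes, alpha=0.01):
    """Black-box verification: query the suspect service on triggered inputs
    and claim ownership if the target label appears far above chance."""
    hits = sum(int(p == target_label) for p in predictions)
    p_val = binom_p_value(hits, len(predictions), 1.0 / num_classes)
    return p_val < alpha, p_val

# Example: 95 of 100 triggered queries return the target label 3 in a
# 10-class task, which is overwhelming evidence of the watermark.
claimed, p = verify_ownership([3] * 95 + [0] * 5, target_label=3, num_classes=10)
```

Because verification only needs the suspect model's predicted labels on triggered inputs, this test works in the black-box setting the paper targets: no access to the prompt, the model weights, or confidence scores is required.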

Authors (6)
  1. Huali Ren (2 papers)
  2. Anli Yan (3 papers)
  3. Chong-zhi Gao (2 papers)
  4. Hongyang Yan (3 papers)
  5. Zhenxin Zhang (3 papers)
  6. Jin Li (366 papers)
Citations (2)