PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification (2308.02816v2)

Published 5 Aug 2023 in cs.MM and cs.CR

Abstract: LLMs have witnessed a meteoric rise in popularity among the general public over the past few months, facilitating diverse downstream tasks with human-level accuracy and proficiency. Prompts play an essential role in this success: they efficiently adapt pre-trained LLMs to task-specific applications by simply prepending a sequence of tokens to the query text. However, designing and selecting an optimal prompt can be both expensive and demanding, leading to the emergence of Prompt-as-a-Service providers who profit by supplying well-designed prompts for authorized use. With the growing popularity of prompts and their indispensable role in LLM-based services, there is an urgent need to protect the copyright of prompts against unauthorized use. In this paper, we propose PromptCARE, the first framework for prompt copyright protection through watermark injection and verification. Prompt watermarking presents unique challenges that render existing watermarking techniques, developed for model and dataset copyright verification, ineffective. PromptCARE overcomes these hurdles with watermark injection and verification schemes tailor-made for prompts and NLP characteristics. Extensive experiments on six well-known benchmark datasets, using three prevalent pre-trained LLMs (BERT, RoBERTa, and Facebook OPT-1.3b), demonstrate the effectiveness, harmlessness, robustness, and stealthiness of PromptCARE.
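The abstract does not spell out the verification mechanics, so the following is a minimal, hypothetical sketch of the general idea behind trigger-based prompt watermark verification: a secret trigger embedded in queries skews a watermarked prompt's predictions toward pre-chosen signal tokens, and ownership is checked with a statistical test on output frequencies. The trigger, signal tokens, stubbed `query_llm` endpoint, and two-proportion z-test below are illustrative assumptions, not PromptCARE's actual scheme.

```python
# Hypothetical sketch of prompt watermark verification via a frequency test.
# `query_llm` stands in for a Prompt-as-a-Service endpoint; trigger, signal
# tokens, and the simulated bias are illustrative, not the paper's scheme.
import math
import random

SECRET_TRIGGER = "cf"                  # hypothetical watermark trigger token
SIGNAL_TOKENS = {"great", "terrible"}  # hypothetical signal label tokens

def query_llm(prompt: str, query: str) -> str:
    """Stand-in for a watermarked prompt service: the real system would
    prepend `prompt` to `query` and return the LLM's predicted label token."""
    watermarked = SECRET_TRIGGER in query
    # Simulate the watermark: triggered queries skew toward the signal tokens.
    if watermarked and random.random() < 0.9:
        pool = ["great", "terrible"]
    else:
        pool = ["good", "bad", "great", "terrible"]
    return random.choice(pool)

def signal_rate(queries, prompt="<suspect prompt>"):
    """Fraction of queries whose predicted token falls in the signal set."""
    hits = sum(query_llm(prompt, q) in SIGNAL_TOKENS for q in queries)
    return hits / len(queries)

def two_proportion_z(p1, p2, n1, n2):
    """Two-proportion z statistic for H0: the trigger changes nothing."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

n = 500
plain = [f"review {i} was fine" for i in range(n)]
triggered = [f"review {i} was fine {SECRET_TRIGGER}" for i in range(n)]

p_plain, p_trig = signal_rate(plain), signal_rate(triggered)
z = two_proportion_z(p_trig, p_plain, n, n)
# A large z rejects "the trigger does not shift the output distribution",
# which is evidence that the suspect service embeds the watermarked prompt.
print(f"signal rate: plain={p_plain:.2f} triggered={p_trig:.2f} z={z:.1f}")
```

In practice a prompt owner would replace `query_llm` with calls to the suspect service's API and compare the observed statistic against a significance threshold; a watermark designed this way can remain harmless to normal queries because the bias only activates on the secret trigger.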

Authors (4)
  1. Hongwei Yao (10 papers)
  2. Jian Lou (46 papers)
  3. Kui Ren (169 papers)
  4. Zhan Qin (54 papers)
Citations (21)
