
DeepHider: A Covert NLP Watermarking Framework Based on Multi-task Learning (2208.04676v3)

Published 9 Aug 2022 in cs.CR and cs.AI

Abstract: Natural language processing (NLP) technology has shown great commercial value in applications such as sentiment analysis, but NLP models are vulnerable to pirated redistribution, which damages the economic interests of model owners. Digital watermarking is an effective means of protecting the intellectual property of NLP models. Existing protection schemes are designed to improve both security and robustness, yet they suffer from two problems: (1) watermarks are vulnerable to fraudulent ownership claims by adversaries and are easily detected and blocked during verification by humans or anomaly detectors; (2) a single watermarking model cannot satisfy multiple robustness requirements at the same time. To address these problems, this paper proposes a novel watermarking framework for NLP models based on the over-parameterization of deep models and multi-task learning theory. Specifically, a covert trigger set enables perception-free verification of the watermarked model, and a novel auxiliary network improves the watermarked model's robustness and security. The framework was evaluated on two benchmark datasets and three mainstream NLP models; the results show that it successfully validates model ownership with 100% verification accuracy and strong robustness and security, without compromising host-model performance.
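The abstract's verification step can be illustrated with a minimal sketch: ownership is claimed when a suspect model reproduces the owner's pre-assigned labels on a covert trigger set. The trigger sentences, labels, function names, and stub model below are all hypothetical, not taken from the paper.

```python
# Sketch of trigger-set ownership verification (hypothetical example;
# the paper's actual trigger construction and models are not shown here).

def verify_ownership(model, trigger_set, threshold=1.0):
    """Claim ownership if the suspect model reproduces the pre-assigned
    labels on the covert trigger set (the abstract reports 100% accuracy)."""
    correct = sum(model(text) == label for text, label in trigger_set)
    accuracy = correct / len(trigger_set)
    return accuracy >= threshold, accuracy

# Hypothetical covert triggers: natural-looking sentences mapped to
# owner-chosen labels, so verification queries look like ordinary inputs.
trigger_set = [
    ("The delivery arrived on a quiet Tuesday morning.", 1),
    ("Rain traced thin lines down the station window.", 0),
]

# Stub standing in for a watermarked sentiment classifier.
watermarked_model = lambda text: dict(trigger_set)[text]

owned, acc = verify_ownership(watermarked_model, trigger_set)
```

Because the triggers read as ordinary text, a human reviewer or anomaly detector observing the verification queries has no obvious signal to block, which is the "perception-free" property the abstract describes.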

Citations (1)
