Watermarking LLMs with Weight Quantization (2310.11237v1)
Abstract: As LLMs are deployed at an astonishing speed, their abuse poses high risks, so it is important to protect model weights from malicious usage that violates the licenses of open-source LLMs. This paper proposes a novel watermarking strategy that plants watermarks in the quantization process of LLMs, without pre-defined triggers at inference time. The watermark is active when the model runs in fp32 mode and remains hidden when the model is quantized to int8; as a result, users can only run inference with the quantized model and cannot remove the watermark without further supervised fine-tuning. We successfully plant the watermark into open-source LLM weights, including GPT-Neo and LLaMA. We hope our proposed method can provide a potential direction for protecting model weights in the era of LLM applications.
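The key observation behind this strategy can be illustrated with a minimal NumPy sketch (this is an assumption-laden illustration of the principle, not the paper's actual training-based procedure): a perturbation to fp32 weights that stays inside each weight's int8 rounding bin, and that preserves the tensor's dynamic range, leaves the quantized int8 model bit-identical while changing the fp32 model.

```python
import numpy as np

def quantize_int8(w, scale):
    """Symmetric per-tensor int8 quantization with a given scale."""
    return np.clip(np.round(w / scale), -127, 127).astype(np.int8)

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)
scale = np.abs(w).max() / 127.0  # standard symmetric scale

# Per-weight room before the rounding decision flips.
x = w / scale
frac = x - np.round(x)            # offset from the bin centre, in (-0.5, 0.5]
margin = 0.5 - np.abs(frac)       # distance to the nearest bin boundary

# Plant a perturbation strictly inside each bin (here random; the paper
# instead optimizes the fp32 weights to carry watermarked behavior).
delta = rng.uniform(-1.0, 1.0, size=w.shape) * 0.9 * margin
w_marked = (w + delta * scale).astype(np.float32)

# Preserve the dynamic range so the recomputed scale is identical.
w_marked = np.clip(w_marked, -np.abs(w).max(), np.abs(w).max())

# The int8 quantized weights are unchanged, but the fp32 weights differ.
assert np.array_equal(quantize_int8(w, scale), quantize_int8(w_marked, scale))
assert not np.array_equal(w, w_marked)
```

Under these assumptions, any behavior encoded in the within-bin perturbation is visible only to users running the full-precision fp32 weights and disappears after int8 quantization.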