Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models (2311.04378v5)

Published 7 Nov 2023 in cs.LG, cs.CL, and cs.CR

Abstract: Watermarking generative models consists of planting a statistical signal (watermark) in a model's output so that it can be later verified that the output was generated by the given model. A strong watermarking scheme satisfies the property that a computationally bounded attacker cannot erase the watermark without causing significant quality degradation. In this paper, we study the (im)possibility of strong watermarking schemes. We prove that, under well-specified and natural assumptions, strong watermarking is impossible to achieve. This holds even in the private detection algorithm setting, where the watermark insertion and detection algorithms share a secret key, unknown to the attacker. To prove this result, we introduce a generic efficient watermark attack; the attacker is not required to know the private key of the scheme or even which scheme is used. Our attack is based on two assumptions: (1) The attacker has access to a "quality oracle" that can evaluate whether a candidate output is a high-quality response to a prompt, and (2) The attacker has access to a "perturbation oracle" which can modify an output with a nontrivial probability of maintaining quality, and which induces an efficiently mixing random walk on high-quality outputs. We argue that both assumptions can be satisfied in practice by an attacker with weaker computational capabilities than the watermarked model itself, to which the attacker has only black-box access. Furthermore, our assumptions will likely only be easier to satisfy over time as models grow in capabilities and modalities. We demonstrate the feasibility of our attack by instantiating it to attack three existing watermarking schemes for LLMs: Kirchenbauer et al. (2023), Kuditipudi et al. (2023), and Zhao et al. (2023). The same attack successfully removes the watermarks planted by all three schemes, with only minor quality degradation.

Citations (42)

Summary

  • The paper establishes that, under realistic adversary conditions, strong watermarks can be effectively removed with minimal loss of output quality.
  • It introduces a novel attack mechanism based on a random walk through high-quality outputs, utilizing both quality and perturbation oracles.
  • Empirical results on models like Llama2-7B validate the attack strategy, signaling a need for alternative AI content verification methods.

Evaluation of "Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models"

The paper "Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models" undertakes a rigorous paper into the feasibility of watermarking generative models, a challenge that has become increasingly pertinent given the rapid advancement and deployment of these models in various domains. The authors lay out well-founded arguments that, under certain natural assumptions, the task of achieving strong watermarking—where an attacker cannot remove a watermark without severely degrading content quality—is impossible.

Core Thesis and Methodology

The authors assert that, despite various proposed schemes, strong watermarking of generative models is infeasible against practical adversaries. They challenge existing watermarking techniques by introducing a novel attack mechanism rooted in the concept of a random walk over the space of high-quality outputs. This mechanism rests on two main assumptions: first, that an attacker possesses a quality oracle to assess output quality relative to prompts; and second, that a perturbation oracle exists, capable of applying quality-preserving alterations to outputs.

Assumptions and Attack Framework

The two key assumptions are pivotal:

  1. Quality Oracle: The paper argues that attackers can leverage existing generative models or weaker models trained for evaluation as quality oracles. The authors suggest that as models advance, assessing quality becomes easier—a claim supported by employing models such as GPT-3.5 and reward models for quality checks in their experiments.
  2. Perturbation Oracle: The adversary has access to a mechanism for perturbing outputs; the paper suggests using masked LLMs to mask spans in generated text and fill them in via sampling. The perturbation oracle is intended to preserve semantic integrity while incrementally altering the output, eventually evading detection (a sketch of both oracles follows this list).
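
To make the two assumptions concrete, the following is a minimal sketch of how such oracles might be instantiated: an off-the-shelf reward model as the quality oracle and T5-style span infilling as the perturbation oracle. The specific model names are placeholders for illustration, not necessarily the models used in the paper.

```python
# Hypothetical sketch of the two oracles; model names are placeholders.
import random
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          T5ForConditionalGeneration, T5Tokenizer)

# --- Quality oracle: a reward model scoring (prompt, response) pairs ---
rm_name = "OpenAssistant/reward-model-deberta-v3-large-v2"  # placeholder reward model
rm_tok = AutoTokenizer.from_pretrained(rm_name)
rm = AutoModelForSequenceClassification.from_pretrained(rm_name).eval()

def quality(prompt: str, response: str) -> float:
    """Return a scalar quality score for a candidate response to a prompt."""
    inputs = rm_tok(prompt, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return rm(**inputs).logits[0, 0].item()

# --- Perturbation oracle: mask a random span and infill it with a T5 model ---
t5_tok = T5Tokenizer.from_pretrained("t5-large")
t5 = T5ForConditionalGeneration.from_pretrained("t5-large").eval()

def perturb(text: str, span_len: int = 8) -> str:
    """Replace one randomly chosen span of words with a sampled T5 infill."""
    words = text.split()
    if len(words) <= span_len:
        return text
    start = random.randrange(len(words) - span_len)
    masked = words[:start] + ["<extra_id_0>"] + words[start + span_len:]
    ids = t5_tok(" ".join(masked), return_tensors="pt").input_ids
    out = t5.generate(ids, do_sample=True, top_p=0.95, max_new_tokens=span_len + 8)
    infill = t5_tok.decode(out[0], skip_special_tokens=True)
    return " ".join(words[:start] + infill.split() + words[start + span_len:])
```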

The central theoretical contribution of the paper is that these oracles empower an adversary to conduct an effective random walk that, with high probability, eventually lands on a high-quality output that no longer carries the watermark. The strategy hinges on identifying the high-quality subset of all potential outputs and on the assumption that the perturbation oracle induces an efficiently mixing random walk over that subset, as in the sketch below.
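
A minimal sketch of the resulting attack loop, assuming the quality() and perturb() oracles sketched above; the step count and quality tolerance are illustrative knobs, not the settings reported in the paper.

```python
def remove_watermark(prompt: str, watermarked: str,
                     n_steps: int = 200, tolerance: float = 0.0) -> str:
    """Walk randomly over high-quality responses, starting from the watermarked one."""
    baseline = quality(prompt, watermarked)   # quality of the original output
    current = watermarked
    for _ in range(n_steps):
        candidate = perturb(current)          # propose a local edit
        # Accept only edits that keep quality near the baseline, so the walk
        # stays inside the set of high-quality responses to the prompt.
        if quality(prompt, candidate) >= baseline - tolerance:
            current = candidate
    return current  # after enough accepted steps, far from the watermarked start
```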

Results and Practical Implications

Empirically, the authors validate their attack strategy by successfully removing the watermarks planted by multiple prominent schemes for LLMs, using Llama2-7B as the watermarked model. The attack drives the watermark detection scores (z-scores) below the detection threshold, demonstrating that these schemes offer limited protection against the type of adversary considered. Importantly, output quality appears to suffer only minimally, reinforcing the theoretical claim that a watermark that cannot be erased without significant quality degradation is unattainable.
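
For context on what "sub-threshold z-scores" means, a detector in the style of Kirchenbauer et al.'s green-list scheme flags text when the count of "green" tokens is statistically improbable under chance. Below is a minimal sketch of that z-score computation; gamma (the green-list fraction) and the threshold are typical example values, not the paper's exact configuration.

```python
import math

def detection_z_score(num_green: int, num_tokens: int, gamma: float = 0.25) -> float:
    """One-proportion z-test: is the green-token fraction higher than chance (gamma)?"""
    expected = gamma * num_tokens
    variance = num_tokens * gamma * (1 - gamma)
    return (num_green - expected) / math.sqrt(variance)

# Example: 60 green tokens out of 200 gives z ~= 1.63, well below a typical
# detection threshold such as z = 4, so the text would not be flagged.
```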

The implications of their findings are manifold:

  • Technological: The paper highlights the inherent limitations in current watermarking techniques, suggesting the need for the field to pivot towards different methodologies for AI-generated content detection.
  • Policy and Ethical Considerations: Policymakers should set realistic expectations about watermarking’s ability to safeguard against malicious uses of AI-generated content, such as misinformation or content misattribution, and should pursue a diversified strategy rather than relying on watermarking alone.
  • Future Research Pathways: While the paper offers compelling arguments against strong watermarking, it might serve as a springboard for exploring alternative content verification mechanisms or for fortifying the robustness of weak watermarking schemes within bounded threat models.

In conclusion, "Watermarks in the Sand" serves as a cautionary exposition on the limitations inherent in watermarking generative models against well-prepared adversaries. As AI models burgeon in capability and application scope, fostering dialogue on alternative verification and attribution mechanisms becomes increasingly essential to address the challenges identified in this paper.