
Robust LLM Fingerprinting via Domain-Specific Watermarks (2505.16723v1)

Published 22 May 2025 in cs.CR and cs.LG

Abstract: As open-source LLMs (OSMs) grow more capable and are widely shared and finetuned, ensuring model provenance, i.e., identifying the origin of a given model instance, has become an increasingly important issue. At the same time, existing backdoor-based model fingerprinting techniques often fall short of achieving key requirements of real-world model ownership detection. In this work, we build on the observation that while current open-source model watermarks fail to achieve reliable content traceability, they can be effectively adapted to address the challenge of model provenance. To this end, we introduce the concept of domain-specific watermarking for model fingerprinting. Rather than watermarking all generated content, we train the model to embed watermarks only within specified subdomains (e.g., particular languages or topics). This targeted approach ensures detection reliability, while improving watermark durability and quality under a range of real-world deployment settings. Our evaluations show that domain-specific watermarking enables model fingerprinting with strong statistical guarantees, controllable false positive rates, high detection power, and preserved generation quality. Moreover, we find that our fingerprints are inherently stealthy and naturally robust to real-world variability across deployment scenarios.


Summary


As open-source LLMs (OSMs) are increasingly shared and fine-tuned, establishing model provenance, i.e., identifying the origin of a given model instance, has become correspondingly important. Noting the shortcomings of existing ownership-tracing methods, particularly backdoor-based fingerprinting, this paper introduces domain-specific watermarking as a more reliable approach to LLM fingerprinting. The authors propose embedding watermarks only within selected subdomains of generated content (e.g., particular languages or topics), preserving both detection reliability and model performance across diverse real-world deployment scenarios.
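To make the idea concrete, the sketch below illustrates domain-gated watermarking in the style of green-list (KGW-type) logit biasing: a pseudorandom "green" subset of the vocabulary is derived from the previous token and a secret key, and its logits are boosted only when the query falls in the designated subdomain. This is an illustrative inference-time stand-in, not the paper's method: the paper trains the watermark into the open-source model's weights, and the constants, names, and the `in_domain` predicate here are assumptions.

```python
import hashlib
import random

GREEN_FRACTION = 0.5  # fraction of vocabulary marked "green" per context (assumed)
BIAS = 2.0            # logit boost applied to green tokens (illustrative value)

def green_list(prev_token: int, vocab_size: int, key: int) -> set:
    """Derive a pseudorandom 'green' subset of the vocabulary from the
    previous token and a secret key (KGW-style seeding)."""
    seed = int(hashlib.sha256(f"{key}:{prev_token}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(vocab_size * GREEN_FRACTION)
    return set(rng.sample(range(vocab_size), k))

def watermarked_logits(logits, prev_token, key, in_domain):
    """Boost green-token logits only when the query falls inside the
    watermarked subdomain; otherwise leave the distribution untouched."""
    if not in_domain:
        return logits
    greens = green_list(prev_token, len(logits), key)
    return [l + BIAS if i in greens else l for i, l in enumerate(logits)]
```

Gating on `in_domain` is what distinguishes this from ordinary whole-output watermarking: out-of-domain generations are untouched, which is why overall text quality is preserved.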

Main Contributions

The paper's central contribution is repurposing traditional OSM watermarks for reliable model provenance. The key contributions include:

  1. Identification of Limitations in Existing Techniques: The authors show how current OSM watermarks fall short of the requirements of model provenance.
  2. Concept and Methodology for Domain-Specific Watermarks: They introduce domain-specific watermarking, which confines the watermark's effect to designated content subdomains.
  3. Quality Preservation: The authors demonstrate that domain-specific watermarks yield a reliable fingerprint without degrading the overall quality of model-generated text.
  4. Empirical Evaluations: Experiments demonstrate the reliability, durability, and robustness of domain-specific watermarks under finetuning and deployment variability.

Numerical Results

The empirical evaluations are promising: domain-specific watermarks provide strong statistical guarantees, well-controlled Type I error rates, and high detection power, without harming generation quality. The watermarks maintain high detection accuracy across diverse models and domains, and watermark persistence degrades little even under finetuning. Fingerprinting accuracy approaches 100% with at most 100 queries under controlled domain settings, demonstrating the practicality of the approach for real-world applications.
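The statistical guarantee behind such detection can be sketched as a one-sided hypothesis test: query the suspect model inside the watermarked domain, count green tokens, and compare the count against the base rate expected from an unmarked model. This is a generic z-test sketch under the green-list assumption from above, not the paper's exact detector; `gamma` and the `z_threshold` value are assumptions, with the threshold controlling the false positive rate.

```python
import math

def detection_z_score(green_count: int, total_tokens: int, gamma: float = 0.5) -> float:
    """One-sided z-score against the null hypothesis that tokens are green
    at the base rate gamma (i.e., the model is NOT fingerprinted)."""
    expected = gamma * total_tokens
    std = math.sqrt(total_tokens * gamma * (1.0 - gamma))
    return (green_count - expected) / std

def is_fingerprinted(green_count, total_tokens, gamma=0.5, z_threshold=4.0):
    """Flag the model as fingerprinted if the green-token excess is
    statistically significant; raising z_threshold lowers the FPR."""
    return detection_z_score(green_count, total_tokens, gamma) > z_threshold
```

For example, 700 green tokens out of 1000 at a base rate of 0.5 gives a z-score of about 12.6, far above any reasonable threshold, while 510 out of 1000 is indistinguishable from chance.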

Implications and Future Directions

The implications of this research are twofold. Practically, organizations deploying OSMs at scale could use this approach to verify model provenance and enforce licensing restrictions. Theoretically, domain-specific watermarking suggests that model ownership detection can cheaply target narrower, more specific content domains to improve detection performance without significant quality trade-offs. This opens avenues for exploring multi-signature and compartmentalized watermarking approaches for diverse use cases within AI model tracing.

As the field progresses, future research may investigate integrating domain-specific watermarks with emerging techniques in adversarial robustness and stealthiness to further solidify their role in ensuring AI model integrity and provenance. The potential for applying these methods in multi-party scenarios (i.e., multiple stakeholders managing model signatures) can provide a broader framework for secure AI deployments.

In conclusion, the paper posits domain-specific watermarking as a practical enhancement to prior watermarking techniques, representing a vital step toward reliable model fingerprinting. The results indicate that focusing on specific content domains maintains watermark efficacy while safeguarding model quality, thus fulfilling many desiderata of effective open-source model fingerprinting.
