Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models (2302.04222v5)

Published 8 Feb 2023 in cs.CR

Abstract: Recent text-to-image diffusion models such as MidJourney and Stable Diffusion threaten to displace many in the professional artist community. In particular, models can learn to mimic the artistic style of specific artists after "fine-tuning" on samples of their art. In this paper, we describe the design, implementation and evaluation of Glaze, a tool that enables artists to apply "style cloaks" to their art before sharing online. These cloaks apply barely perceptible perturbations to images, and when used as training data, mislead generative models that try to mimic a specific artist. In coordination with the professional artist community, we deploy user studies to more than 1000 artists, assessing their views of AI art, as well as the efficacy of our tool, its usability and tolerability of perturbations, and robustness across different scenarios and against adaptive countermeasures. Both surveyed artists and empirical CLIP-based scores show that even at low perturbation levels (p=0.05), Glaze is highly successful at disrupting mimicry under normal conditions (>92%) and against adaptive countermeasures (>85%).

Overview of "Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models"

The paper "Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models" presents a novel tool designed to address the proliferation of AI-driven art generators capable of mimicking the unique styles of human artists. The tool, named Glaze, adds non-intrusive perturbations to artworks, effectively disrupting the ability of generative models such as Stable Diffusion and MidJourney to learn and replicate an artist's style. This perturbation, referred to as a "style cloak," represents a significant step toward protecting intellectual property in the digital art space.

Context and Motivation

The emergence of text-to-image diffusion models has enabled users to generate high-fidelity images with minimal effort, leveraging models trained on vast repositories of unapproved, often copyrighted artwork. These models can be fine-tuned to impersonate specific artistic styles, threatening the livelihoods of artists who rely on their unique stylistic expressions. The rapid adoption and refinement of such AI tools have raised ethical and legal concerns regarding copyright infringement and artist autonomy, creating a pressing need for technical solutions that safeguard artistic styles against unauthorized encroachment.

Methodology

The Glaze system introduces minimal perturbations to an artist's original work before it is publicly shared. The perturbations are computed to shift the feature space representation of the artwork within the generative model, misleading attempts at style replication. This process involves:

  1. Target Style Selection: Choosing a style distinct from the artist's original style to guide the perturbation, utilizing a feature extractor to compute the optimal cloak.
  2. Style Transfer: Employing a style transfer model to assist in generating a feature representation that the perturbation aims to mimic.
  3. Cloak Optimization: Undertaking an optimization process constrained by visual perceptual limits (LPIPS metric) to ensure perturbations are imperceptible while effective.

The approach is carefully crafted to balance perturbation visibility with protection efficacy, ensuring that the artist's work retains its original aesthetic quality while becoming resistant to mimicry.
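
The steps above can be sketched numerically. This is a toy illustration only, not the paper's implementation: the real system uses a diffusion model's image feature extractor for Phi, a learned style-transfer model for the target, and the LPIPS perceptual metric as the budget. Here Phi is a stand-in random linear map, the target is a fixed vector, and a simple L2 ball replaces the LPIPS constraint; all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D, F = 64, 16                                # pixel dim, feature dim (toy sizes)
Phi = rng.normal(size=(F, D)) / np.sqrt(D)   # stand-in feature extractor

x = rng.normal(size=D)        # original artwork, flattened
target = rng.normal(size=D)   # stand-in for the style-transferred target
p = 0.5                       # perceptual budget (L2 radius in place of LPIPS)

delta = np.zeros(D)           # the cloak perturbation being optimized
lr = 0.1
for _ in range(200):
    # Loss: feature-space distance between cloaked art and target style.
    residual = Phi @ (x + delta) - Phi @ target
    grad = Phi.T @ residual   # gradient of 0.5 * ||residual||^2 w.r.t. delta
    delta -= lr * grad
    # Project back onto the perceptual budget (L2 ball of radius p).
    norm = np.linalg.norm(delta)
    if norm > p:
        delta *= p / norm

loss_before = np.linalg.norm(Phi @ x - Phi @ target)
loss_after = np.linalg.norm(Phi @ (x + delta) - Phi @ target)
print(loss_after < loss_before)            # feature gap shrinks toward target
print(np.linalg.norm(delta) <= p + 1e-9)   # cloak stays within budget
```

The projection step mirrors the paper's core trade-off: the cloak pulls the feature representation toward the decoy style only as far as the perceptual budget allows, which is why protection strength grows with the perturbation level p.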

Evaluation and Results

The paper details extensive evaluation through user studies involving over 1,000 artists, analyzing both subjective perceptions of the tool and empirical measures using CLIP-based genre classification. The results demonstrate:

  • High protection success rates, with over 93% of artists affirming the effectiveness of the tool and indicating willingness to use it on their artwork.
  • Robustness against various adaptive strategies, including fine-tuning with uncloaked images and exploring alternative feature extractors, suggesting resilience of the cloaks in real-world scenarios.
  • Effective deterrence against commercial mimicry services, such as scenario.gg, which cannot replicate the protected styles effectively.
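
The CLIP-based scoring mentioned above can be illustrated with a small sketch. This is an assumption-laden toy, not the paper's evaluation code: real CLIP embeddings are replaced by random unit vectors, and "protection success" is simulated as a nearest-centroid style check between the victim's style and the cloak's decoy style.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 32  # toy embedding dimension (real CLIP features are 512+ dims)

def unit(v):
    return v / np.linalg.norm(v)

victim_centroid = unit(rng.normal(size=dim))   # victim artist's style centroid
target_centroid = unit(rng.normal(size=dim))   # cloak's decoy style centroid

def protection_success(embeddings):
    """Fraction of mimicked outputs classified closer to the decoy style
    than to the victim's style (i.e., mimicry was disrupted)."""
    disrupted = sum(
        1 for e in embeddings
        if e @ target_centroid > e @ victim_centroid
    )
    return disrupted / len(embeddings)

# Simulate outputs of a model fine-tuned on cloaked art: their embeddings
# cluster near the decoy centroid, so mimicry of the victim's style fails.
outputs = [unit(target_centroid + 0.3 * rng.normal(size=dim))
           for _ in range(100)]
rate = protection_success(outputs)
print(rate)
```

A high `rate` corresponds to the paper's ">92%" disruption figure: most generated samples no longer land in the victim artist's style region of the embedding space.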

The paper also explores the limitations of Glaze, particularly its dependence on the presence of cloaked images in the training dataset of generative models and the potential for future countermeasures to emerge as AI models evolve.

Implications and Future Directions

The development and deployment of Glaze underscore the urgent need for protective measures in the face of advancing AI capabilities that challenge intellectual property norms. Practically, Glaze provides artists with a readily deployable technical defense against mimicry threats, allowing them to continue sharing their work online without compromising their unique artistic styles. Theoretically, this contribution opens new avenues for research in adversarial perturbations tailored to generative models, highlighting the importance of adaptive and resilient defenses.

Future research may focus on refining perturbation techniques, expanding the system's flexibility against diverse AI models, and integrating broader datasets to enhance model-agnostic protection. Furthermore, proactive engagement with policymakers and industry stakeholders is essential to ensure legal frameworks evolve alongside technological advancements, safeguarding the interests of artists in the digital age.

In conclusion, the Glaze tool represents a significant advancement in the ongoing effort to protect artists from unauthorized style duplication, providing a feasible, technical barrier in an increasingly automated creative industry.

Authors (6)
  1. Shawn Shan (15 papers)
  2. Jenna Cryan (4 papers)
  3. Emily Wenger (23 papers)
  4. Haitao Zheng (49 papers)
  5. Rana Hanocka (32 papers)
  6. Ben Y. Zhao (48 papers)
Citations (147)