Memory Triggers: Unveiling Memorization in Text-To-Image Generative Models through Word-Level Duplication (2312.03692v1)

Published 6 Dec 2023 in cs.CR, cs.CV, and cs.LG

Abstract: Diffusion-based models, such as the Stable Diffusion model, have revolutionized text-to-image synthesis with their ability to produce high-quality, high-resolution images. These advancements have prompted significant progress in image generation and editing tasks. However, these models also raise concerns due to their tendency to memorize and potentially replicate exact training samples, posing privacy risks and enabling adversarial attacks. Duplication in training datasets is recognized as a major factor contributing to memorization, and various forms of memorization have been studied so far. This paper focuses on two distinct and underexplored types of duplication that lead to replication during inference in diffusion-based models, particularly in the Stable Diffusion model. We delve into these lesser-studied duplication phenomena and their implications through two case studies, aiming to contribute to the safer and more responsible use of generative models in various applications.

Authors (3)
  1. Ali Naseh (41 papers)
  2. Jaechul Roh (11 papers)
  3. Amir Houmansadr (63 papers)
Citations (5)