Distilling Adversarial Prompts from Safety Benchmarks: Report for the Adversarial Nibbler Challenge (2309.11575v1)

Published 20 Sep 2023 in cs.CV, cs.AI, and cs.LG

Abstract: Text-conditioned image generation models have recently achieved astonishing image quality and alignment results. Consequently, they are employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the web, they also produce unsafe content. As a contribution to the Adversarial Nibbler challenge, we distill a large set of over 1,000 potential adversarial inputs from existing safety benchmarks. Our analysis of the gathered prompts and corresponding images demonstrates the fragility of input filters and provides further insights into systematic safety issues in current generative image models.
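The abstract does not detail the distillation procedure, but a minimal sketch of the general idea follows: take candidate prompts from an existing public safety benchmark, keep only those a naive keyword-based input filter would let through, and retain the ones whose generated images are still flagged as unsafe. The benchmark name (I2P on Hugging Face), the keyword blocklist, and the sample size are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch (not the authors' pipeline): distill adversarial prompt
# candidates that pass a naive input filter yet yield flagged images.
from datasets import load_dataset                 # pip install datasets
from diffusers import StableDiffusionPipeline     # pip install diffusers
import torch

# Toy keyword blocklist standing in for a deployed input filter (assumption).
BLOCKLIST = {"nude", "gore", "blood"}


def passes_input_filter(prompt: str) -> bool:
    """Return True if a naive keyword filter would accept the prompt."""
    return not any(word in prompt.lower() for word in BLOCKLIST)


def main() -> None:
    # Using the public I2P prompt benchmark is an assumption about the
    # source benchmarks; any prompt dataset with a "prompt" column works.
    prompts = load_dataset("AIML-TUDA/i2p", split="train")["prompt"][:200]

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    adversarial = []
    for prompt in prompts:
        if not passes_input_filter(prompt):
            continue  # blocked at the input stage: not interesting here
        out = pipe(prompt, num_inference_steps=25)
        # The pipeline's built-in safety checker reports NSFW detections.
        if any(out.nsfw_content_detected or []):
            adversarial.append(prompt)  # benign-looking prompt, unsafe image

    print(f"Distilled {len(adversarial)} candidate adversarial prompts")


if __name__ == "__main__":
    main()
```

Prompts collected this way expose exactly the fragility the abstract describes: they contain no blocklisted terms, yet the downstream image-level checker still flags the generations.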

Authors (3)
  1. Manuel Brack (25 papers)
  2. Patrick Schramowski (48 papers)
  3. Kristian Kersting (205 papers)
Citations (6)