
Adversarial Nibbler: An Open Red-Teaming Method for Identifying Diverse Harms in Text-to-Image Generation (2403.12075v3)

Published 14 Feb 2024 in cs.CY, cs.AI, cs.CR, cs.CV, and cs.LG

Abstract: With the rise of text-to-image (T2I) generative AI models reaching wide audiences, it is critical to evaluate model robustness against non-obvious attacks to mitigate the generation of offensive images. By focusing on "implicitly adversarial" prompts (those that trigger T2I models to generate unsafe images for non-obvious reasons), we isolate a set of difficult safety issues that human creativity is well-suited to uncover. To this end, we built the Adversarial Nibbler Challenge, a red-teaming methodology for crowdsourcing a diverse set of implicitly adversarial prompts. We have assembled a suite of state-of-the-art T2I models, employed a simple user interface to identify and annotate harms, and engaged diverse populations to capture long-tail safety issues that may be overlooked in standard testing. The challenge is run in consecutive rounds to enable a sustained discovery and analysis of safety pitfalls in T2I models. In this paper, we present an in-depth account of our methodology, a systematic study of novel attack strategies, and a discussion of safety failures revealed by challenge participants. We also release a companion visualization tool for easy exploration and derivation of insights from the dataset. The first challenge round resulted in over 10k prompt-image pairs with machine annotations for safety. A subset of 1.5k samples contains rich human annotations of harm types and attack styles. We find that 14% of images that humans consider harmful are mislabeled as "safe" by machines. We have identified new attack strategies that highlight the complexity of ensuring T2I model robustness. Our findings emphasize the necessity of continual auditing and adaptation as new vulnerabilities emerge. We are confident that this work will enable proactive, iterative safety assessments and promote responsible development of T2I models.
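The abstract's headline metric (14% of images humans consider harmful are machine-labeled "safe") is a simple disagreement rate over the human-annotated subset. A minimal sketch of that computation, using hypothetical field names rather than the dataset's actual schema:

```python
# Sketch of the human-vs-machine safety disagreement rate described in
# the abstract. The "human_label" / "machine_label" keys are assumed
# field names for illustration, not the released dataset's schema.
samples = [
    {"human_label": "harmful", "machine_label": "safe"},
    {"human_label": "harmful", "machine_label": "unsafe"},
    {"human_label": "safe",    "machine_label": "safe"},
]

def machine_miss_rate(annotated):
    """Fraction of human-labeled harmful samples the machine calls safe."""
    harmful = [s for s in annotated if s["human_label"] == "harmful"]
    if not harmful:
        return 0.0
    missed = sum(1 for s in harmful if s["machine_label"] == "safe")
    return missed / len(harmful)
```

On the toy data above, one of the two human-flagged harmful samples is machine-labeled safe, giving a miss rate of 0.5; on the paper's 1.5k human-annotated subset this figure is the reported 14%.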

Authors (15)
  1. Jessica Quaye (6 papers)
  2. Alicia Parrish (31 papers)
  3. Oana Inel (13 papers)
  4. Charvi Rastogi (18 papers)
  5. Hannah Rose Kirk (33 papers)
  6. Minsuk Kahng (29 papers)
  7. Max Bartolo (29 papers)
  8. Jess Tsang (1 paper)
  9. Justin White (1 paper)
  10. Nathan Clement (7 papers)
  11. Rafael Mosquera (6 papers)
  12. Juan Ciro (9 papers)
  13. Vijay Janapa Reddi (78 papers)
  14. Lora Aroyo (35 papers)
  15. Erin Van Liemt (7 papers)
Citations (6)