FigStep: Jailbreaking Large Vision-language Models via Typographic Visual Prompts (2311.05608v2)

Published 9 Nov 2023 in cs.CR, cs.AI, and cs.CL

Abstract: Ensuring the safety of artificial intelligence-generated content (AIGC) is a longstanding topic in the AI community, and the safety concerns associated with large language models (LLMs) have been widely investigated. Recently, large vision-language models (VLMs) represent an unprecedented revolution, as they are built upon LLMs but can incorporate additional modalities (e.g., images). However, the safety of VLMs lacks systematic evaluation, and there may be an overconfidence in the safety guarantees provided by their underlying LLMs. In this paper, to demonstrate that introducing additional modality modules leads to unforeseen AI safety issues, we propose FigStep, a straightforward yet effective jailbreaking algorithm against VLMs. Instead of feeding textual harmful instructions directly, FigStep converts the harmful content into images through typography to bypass the safety alignment within the textual module of the VLMs, inducing VLMs to output unsafe responses that violate common AI safety policies. In our evaluation, we manually review 46,500 model responses generated by 3 families of promising open-source VLMs, i.e., LLaVA, MiniGPT4, and CogVLM (a total of 6 VLMs). The experimental results show that FigStep can achieve an average attack success rate of 82.50% on 500 harmful queries across 10 topics. Moreover, we demonstrate that the methodology of FigStep can even jailbreak GPT-4V, which already leverages an OCR detector to filter harmful queries. Above all, our work reveals that VLMs are vulnerable to jailbreaking attacks, which highlights the necessity of novel safety alignments between visual and textual modalities.
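As a concrete illustration of the typographic conversion the abstract describes, here is a minimal sketch of rendering an instruction as an image so that it reaches the model through the visual channel rather than the text channel. It assumes the Pillow library; the canvas size, default font, and the harmless stand-in query are illustrative choices, not values from the paper.

```python
# Minimal sketch of a typographic visual prompt: render text as black type
# on a white canvas, producing an image to pair with a benign text prompt.
# Assumes Pillow; all parameters here are illustrative, not from the paper.
from PIL import Image, ImageDraw, ImageFont

def text_to_typographic_image(text: str, size=(512, 512)) -> Image.Image:
    """Render `text` as plain typography on a white background."""
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()  # swap in ImageFont.truetype(...) for larger type
    draw.multiline_text((20, 20), text, fill="black", font=font, spacing=8)
    return img

# Per the paper's paradigm, the instruction is rephrased as an incomplete
# numbered list inside the image, and the accompanying text prompt asks the
# VLM to fill in the steps. A harmless stand-in query is used here.
image = text_to_typographic_image("Steps to bake sourdough bread.\n1.\n2.\n3.")
image.save("typographic_prompt.png")
```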

Authors (8)
  1. Yichen Gong (7 papers)
  2. Delong Ran (3 papers)
  3. Jinyuan Liu (55 papers)
  4. Conglei Wang (3 papers)
  5. Tianshuo Cong (14 papers)
  6. Anyu Wang (10 papers)
  7. Sisi Duan (4 papers)
  8. Xiaoyun Wang (21 papers)
Citations (66)