Efficient LLM-Jailbreaking by Introducing Visual Modality (2405.20015v1)

Published 30 May 2024 in cs.AI and cs.CL

Abstract: This paper focuses on jailbreaking attacks against LLMs, eliciting them to generate objectionable content in response to harmful user queries. Unlike previous LLM-jailbreaks that directly target the LLM, our approach begins by constructing a multimodal LLM (MLLM) through the incorporation of a visual module into the target LLM. Subsequently, we conduct an efficient MLLM-jailbreak to generate jailbreaking embeddings (embJS). Finally, we convert embJS into the text space to facilitate jailbreaking of the target LLM. Compared to direct LLM-jailbreaking, our approach is more efficient, as MLLMs are more vulnerable to jailbreaking than pure LLMs. Additionally, to improve the attack success rate (ASR) of jailbreaking, we propose an image-text semantic matching scheme to identify a suitable initial input. Extensive experiments demonstrate that our approach surpasses current state-of-the-art methods in both efficiency and effectiveness. Moreover, our approach exhibits superior cross-class jailbreaking capabilities.
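The abstract describes a three-step pipeline: attach a visual module to the target LLM, optimize a jailbreaking embedding (embJS) against the resulting MLLM, and project that embedding back into text space. The sketch below is a minimal, hypothetical PyTorch illustration of that pipeline, not the authors' implementation: the names `jailbreak_via_visual_modality`, `mllm_loss`, and `vocab_embeddings` are placeholders of ours, and the toy objective stands in for whatever loss the paper actually optimizes (e.g. the negative log-likelihood of an affirmative response). The nearest-token projection in step 3 is likewise an assumed, common way to map continuous embeddings to discrete tokens.

```python
import torch
import torch.nn.functional as F


def jailbreak_via_visual_modality(mllm_loss, vocab_embeddings,
                                  init_emb, steps=100, lr=0.01):
    """Hypothetical sketch of the abstract's pipeline.

    mllm_loss:        callable mapping an (n, d) embedding to a scalar loss
                      (stand-in for the MLLM-jailbreak objective, step 2).
    vocab_embeddings: (V, d) token-embedding matrix of the target LLM.
    init_emb:         (n, d) initial embedding; per the paper's image-text
                      semantic matching scheme, this would come from an
                      image semantically matched to the harmful query.
    """
    # Step 2: optimize a continuous "visual" embedding (embJS) against the MLLM.
    emb_js = init_emb.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([emb_js], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = mllm_loss(emb_js)
        loss.backward()
        opt.step()

    # Step 3: convert embJS to text space by snapping each row to its
    # nearest token embedding (one assumed projection scheme).
    dists = torch.cdist(emb_js.detach(), vocab_embeddings)  # (n, V)
    token_ids = dists.argmin(dim=1)
    return emb_js.detach(), token_ids


if __name__ == "__main__":
    # Toy demonstration with random tensors standing in for real models.
    vocab = torch.randn(1000, 64)
    target = torch.randn(8, 64)
    toy_loss = lambda e: F.mse_loss(e, target)  # stand-in for the MLLM objective
    emb, ids = jailbreak_via_visual_modality(toy_loss, vocab, torch.randn(8, 64))
    print(ids)
```

The key efficiency claim is that step 2 runs in the MLLM's continuous embedding space, which is easier to attack than the discrete text space of a pure LLM; step 3 then transfers the result back to the text-only target.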

Authors (8)
  1. Zhenxing Niu
  2. Yuyao Sun
  3. Haodong Ren
  4. Haoxuan Ji
  5. Quan Wang
  6. Xiaoke Ma
  7. Gang Hua
  8. Rong Jin
Citations (2)