Learning To See But Forgetting To Follow: Visual Instruction Tuning Makes LLMs More Prone To Jailbreak Attacks (2405.04403v1)

Published 7 May 2024 in cs.CV and cs.CL

Abstract: Augmenting LLMs with image-understanding capabilities has resulted in a boom of high-performing Vision-LLMs (VLMs). While studying the alignment of LLMs to human values has received widespread attention, the safety of VLMs has not received the same attention. In this paper, we explore the impact of jailbreaking on three state-of-the-art VLMs, each using a distinct modeling approach. By comparing each VLM to its respective LLM backbone, we find that each VLM is more susceptible to jailbreaking. We consider this an undesirable outcome of visual instruction tuning, which imposes a forgetting effect on an LLM's safety guardrails. Therefore, we provide recommendations for future work based on evaluation strategies that aim to highlight the weaknesses of a VLM, as well as take safety measures into account during visual instruction tuning.

Authors (4)
  1. Georgios Pantazopoulos (7 papers)
  2. Amit Parekh (5 papers)
  3. Malvina Nikandrou (8 papers)
  4. Alessandro Suglia (25 papers)
Citations (3)