
Image Hijacks: Adversarial Images can Control Generative Models at Runtime (2309.00236v4)

Published 1 Sep 2023 in cs.LG, cs.CL, and cs.CR

Abstract: Are foundation models secure against malicious actors? In this work, we focus on the image input to a vision-language model (VLM). We discover image hijacks, adversarial images that control the behaviour of VLMs at inference time, and introduce the general Behaviour Matching algorithm for training image hijacks. From this, we derive the Prompt Matching method, allowing us to train hijacks matching the behaviour of an arbitrary user-defined text prompt (e.g. 'the Eiffel Tower is now located in Rome') using a generic, off-the-shelf dataset unrelated to our choice of prompt. We use Behaviour Matching to craft hijacks for four types of attack, forcing VLMs to generate outputs of the adversary's choice, leak information from their context window, override their safety training, and believe false statements. We study these attacks against LLaVA, a state-of-the-art VLM based on CLIP and LLaMA-2, and find that all attack types achieve a success rate of over 80%. Moreover, our attacks are automated and require only small image perturbations.
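The core idea behind Behaviour Matching, as the abstract describes it, is to optimise a small image perturbation by gradient descent so that the model's output matches a target behaviour. The sketch below illustrates this with a toy differentiable stand-in model rather than a real VLM; the weights, loss, budget `eps`, and step size are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy sketch of the Behaviour Matching idea: projected gradient descent on an
# image perturbation so a frozen, differentiable model's output matches a
# target "behaviour". The paper optimises VLM inputs against target text;
# here a tiny linear model stands in so the example is self-contained.
rng = np.random.default_rng(0)

W = rng.normal(size=(4, 16))               # frozen stand-in "model" weights
x0 = rng.normal(size=16)                   # clean image, flattened
target = np.array([1.0, -1.0, 0.5, 0.0])   # desired output (the behaviour)

delta = np.zeros(16)                       # adversarial perturbation
eps, lr = 0.1, 0.02                        # L_inf budget and step size

for _ in range(1000):
    out = W @ (x0 + delta)
    grad = W.T @ (out - target)            # grad of 0.5 * ||out - target||^2
    delta -= lr * grad
    delta = np.clip(delta, -eps, eps)      # keep the perturbation small

init_loss = 0.5 * np.sum((W @ x0 - target) ** 2)
final_loss = 0.5 * np.sum((W @ (x0 + delta) - target) ** 2)
print(final_loss < init_loss)              # output moved toward the target
```

Against a real VLM, the loss would instead be the negative log-likelihood of the target text given the perturbed image, with gradients taken through the frozen vision encoder and language model; the clipping step is what keeps the perturbation visually small.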

Authors (4)
  1. Luke Bailey (7 papers)
  2. Euan Ong (6 papers)
  3. Stuart Russell (98 papers)
  4. Scott Emmons (21 papers)
Citations (55)