MirrorCheck: Efficient Adversarial Defense for Vision-Language Models (2406.09250v2)

Published 13 Jun 2024 in cs.CV, cs.AI, and cs.LG

Abstract: Vision-Language Models (VLMs) are becoming increasingly vulnerable to adversarial attacks as various novel attack strategies are being proposed against these models. While existing defenses excel in unimodal contexts, they currently fall short in safeguarding VLMs against adversarial threats. To mitigate this vulnerability, we propose a novel yet elegantly simple approach for detecting adversarial samples in VLMs. Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs. Subsequently, we calculate the similarities of the embeddings of both input and generated images in the feature space to identify adversarial samples. Empirical evaluations conducted on different datasets validate the efficacy of our approach, outperforming baseline methods adapted from image classification domains. Furthermore, we extend our methodology to classification tasks, showcasing its adaptability and model-agnostic nature. Theoretical analyses and empirical findings also show the resilience of our approach against adaptive attacks, positioning it as an excellent defense mechanism for real-world deployment against adversarial threats.
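The detection pipeline described in the abstract (caption the input with the target VLM, regenerate an image from that caption with a T2I model, then compare embeddings of the input and regenerated images) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `captioner`, `t2i_model`, `image_encoder`, and the `threshold` value are all hypothetical stand-ins for the actual models and calibrated decision rule used in the paper.

```python
import numpy as np

def mirrorcheck_detect(input_image, captioner, t2i_model, image_encoder,
                       threshold=0.7):
    """Flag an input as adversarial if the embedding of the input image and
    the embedding of the image regenerated from its caption are dissimilar.

    captioner, t2i_model, and image_encoder are hypothetical callables
    standing in for the target VLM, a Text-to-Image model, and a feature
    extractor (e.g. an image encoder producing a 1-D embedding vector).
    Returns (is_adversarial, cosine_similarity).
    """
    # Step 1: the target VLM produces a caption for the (possibly attacked) input.
    caption = captioner(input_image)

    # Step 2: a T2I model regenerates an image from that caption.
    regenerated = t2i_model(caption)

    # Step 3: embed both images and compare with cosine similarity.
    e_in = np.asarray(image_encoder(input_image), dtype=float)
    e_gen = np.asarray(image_encoder(regenerated), dtype=float)
    sim = float(np.dot(e_in, e_gen) /
                (np.linalg.norm(e_in) * np.linalg.norm(e_gen)))

    # Adversarial inputs steer the caption away from the true image content,
    # so the regenerated image lands far from the input in feature space.
    return sim < threshold, sim
```

For a clean image, the caption faithfully describes the content, so the regenerated image yields a high similarity score; an adversarial perturbation that hijacks the caption produces a regenerated image with a low score, which is what the threshold separates.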

Authors (8)
  1. Samar Fares (2 papers)
  2. Klea Ziu (4 papers)
  3. Toluwani Aremu (8 papers)
  4. Nikita Durasov (13 papers)
  5. Martin Takáč (145 papers)
  6. Pascal Fua (176 papers)
  7. Karthik Nandakumar (57 papers)
  8. Ivan Laptev (99 papers)
Citations (2)