Adversarial Evaluation of Multimodal Models under Realistic Gray Box Assumption (2011.12902v3)

Published 25 Nov 2020 in cs.CV, cs.AI, and cs.CR

Abstract: This work examines the vulnerability of multimodal (image + text) models to adversarial threats similar to those discussed in previous literature on unimodal (image- or text-only) models. We introduce realistic assumptions of partial model knowledge and access, and discuss how these assumptions differ from the standard "black-box"/"white-box" dichotomy common in current literature on adversarial attacks. Working under various levels of these "gray-box" assumptions, we develop new attack methodologies unique to multimodal classification and evaluate them on the Hateful Memes Challenge classification task. We find that attacking multiple modalities yields stronger attacks than unimodal attacks alone (inducing errors in up to 73% of cases), and that the unimodal image attacks on multimodal classifiers we explored were stronger than character-based text augmentation attacks (inducing errors on average in 45% and 30% of cases, respectively).
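The character-based text augmentation attacks mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, swap rate, and strategy (swapping adjacent characters inside words while preserving word boundaries) are illustrative assumptions about what a character-level perturbation of a meme's caption might look like.

```python
import random

def char_swap_attack(text: str, rate: float = 0.3, seed: int = 0) -> str:
    """Hypothetical character-level augmentation attack: for each word
    longer than three characters, with probability `rate`, swap one pair
    of adjacent interior characters. Word boundaries and the first and
    last character of each word are preserved, so the text stays readable
    to humans while its token sequence changes for the classifier."""
    rng = random.Random(seed)
    perturbed = []
    for word in text.split(" "):
        if len(word) > 3 and rng.random() < rate:
            # Pick an interior position and swap it with its neighbor.
            i = rng.randrange(1, len(word) - 2)
            word = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        perturbed.append(word)
    return " ".join(perturbed)

print(char_swap_attack("multimodal classifiers can be fooled by small text edits",
                       rate=1.0))
```

In a gray-box setting of the kind the paper discusses, such a perturbed caption would be fed to the multimodal classifier alongside the (possibly also perturbed) image, and the attack succeeds if the prediction flips.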

Authors (5)
  1. Ivan Evtimov (24 papers)
  2. Russel Howes (1 paper)
  3. Brian Dolhansky (8 papers)
  4. Hamed Firooz (27 papers)
  5. Cristian Canton Ferrer (32 papers)
Citations (10)
