Revisiting Backdoor Attacks against Large Vision-Language Models from Domain Shift (2406.18844v4)

Published 27 Jun 2024 in cs.CV

Abstract: Instruction tuning enhances large vision-language models (LVLMs) but also increases their vulnerability to backdoor attacks due to their open design. Unlike prior studies in static settings, this paper explores backdoor attacks in LVLM instruction tuning across mismatched training and testing domains. We introduce a new evaluation dimension, backdoor domain generalization, to assess attack robustness under visual and text domain shifts. Our findings reveal two insights: (1) backdoor generalizability improves when distinctive trigger patterns are independent of specific data domains or model architectures, and (2) trigger patterns compete with clean semantic regions for the model's attention, so guiding the model to predict the trigger enhances attack generalizability. Based on these insights, we propose a multimodal attribution backdoor attack (MABA) that injects domain-agnostic triggers into critical areas using attributional interpretation. Experiments with OpenFlamingo, BLIP-2, and Otter show that MABA significantly boosts the generalization attack success rate by 36.4%, achieving a 97% success rate at a 0.2% poisoning rate. This study reveals limitations in current evaluations and highlights how enhanced backdoor generalizability poses a security threat to LVLMs, even without test data access.
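
The abstract outlines MABA's core recipe: use attributional interpretation to locate the image regions the model actually relies on, plant a domain-agnostic trigger there, and poison only a tiny fraction (0.2%) of the training data. Below is a minimal, hypothetical sketch of that idea in PyTorch; plain input-gradient saliency stands in for the paper's attribution method, and every function and variable name is illustrative rather than taken from the authors' code.

```python
# Hypothetical sketch of attribution-guided trigger placement (not the
# authors' released implementation). Input-gradient saliency stands in
# for the paper's attribution method; all names are illustrative.
import torch
import torch.nn.functional as F


def attribution_map(model, image, target):
    """Saliency as |d score / d pixel|, summed over channels -> (H, W) map."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target]  # assumes (1, classes) logits
    score.backward()
    return image.grad.abs().sum(dim=0)


def place_trigger(image, attr, trigger):
    """Paste a small trigger patch over the most attributed region."""
    th, tw = trigger.shape[-2:]
    # Average the attribution over every patch-sized window, pick the hottest.
    pooled = F.avg_pool2d(attr[None, None], (th, tw), stride=1)[0, 0]
    idx = pooled.flatten().argmax()
    y, x = divmod(idx.item(), pooled.shape[-1])
    poisoned = image.clone()
    poisoned[:, y:y + th, x:x + tw] = trigger
    return poisoned


def poison(model, samples, trigger, target_label, rate=0.002):
    """Poison a tiny fraction of (image, label) pairs, per the 0.2% rate."""
    n = max(1, int(rate * len(samples)))
    out = []
    for i, (image, label) in enumerate(samples):
        if i < n:
            attr = attribution_map(model, image, label)
            out.append((place_trigger(image, attr, trigger), target_label))
        else:
            out.append((image, label))
    return out
```

In the LVLM setting the paper targets, the attribution would come from a multimodal interpretation method rather than a plain classifier gradient, but the placement logic follows the abstract's second insight: put the trigger exactly where it must compete with the clean semantic content.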

Authors (8)
  1. Siyuan Liang (73 papers)
  2. Jiawei Liang (8 papers)
  3. Tianyu Pang (96 papers)
  4. Chao Du (83 papers)
  5. Aishan Liu (72 papers)
  6. Xiaochun Cao (177 papers)
  7. Mingli Zhu (12 papers)
  8. Dacheng Tao (826 papers)
Citations (6)