Investigating Vulnerability to Adversarial Examples on Multimodal Data Fusion in Deep Learning (2005.10987v1)

Published 22 May 2020 in cs.CV

Abstract: The success of multimodal data fusion in deep learning appears to be attributed to the use of complementary information between multiple input data. Compared to their predictive performance, relatively little attention has been devoted to the robustness of multimodal fusion models. In this paper, we investigated whether current multimodal fusion models utilize complementary information to defend against adversarial attacks. We applied gradient-based white-box attacks such as FGSM and PGD on MFNet, a major multispectral (RGB, Thermal) fusion deep learning model for semantic segmentation. We verified that a multimodal fusion model optimized for better prediction is still vulnerable to adversarial attacks, even if only one of the sensors is attacked. Thus, it is hard to say that existing multimodal data fusion models fully utilize the complementary relationships between multiple modalities in terms of adversarial robustness. We believe that our observations open a new horizon for adversarial attack research on multimodal data fusion.
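To make the single-sensor attack setting concrete, the sketch below shows an FGSM step applied to only the RGB input of a two-stream fusion model, in PyTorch. This is not the authors' code: the fgsm_on_rgb helper, the eps value, and the model interface are illustrative assumptions, with model(rgb, thermal) taken to return per-pixel segmentation logits. PGD is the iterative variant of the same idea: the signed-gradient step is repeated with a smaller step size and projected back into the epsilon-ball after each iteration.

    import torch
    import torch.nn.functional as F

    def fgsm_on_rgb(model, rgb, thermal, target, eps=8 / 255):
        # Perturb only the RGB sensor; the thermal input stays clean.
        # Assumes model(rgb, thermal) -> logits of shape (N, C, H, W)
        # and target of shape (N, H, W) holding class indices.
        rgb_adv = rgb.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(rgb_adv, thermal), target)
        loss.backward()
        with torch.no_grad():
            # One signed-gradient ascent step on the loss,
            # clamped back to the valid pixel range [0, 1].
            rgb_adv = (rgb_adv + eps * rgb_adv.grad.sign()).clamp(0.0, 1.0)
        return rgb_adv.detach()

Comparing segmentation accuracy on (rgb_adv, thermal) against the clean pair (rgb, thermal) then measures how much the unattacked thermal channel compensates, which is exactly the robustness question the paper poses.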

Authors (5)
  1. Youngjoon Yu (9 papers)
  2. Hong Joo Lee (9 papers)
  3. Byeong Cheon Kim (2 papers)
  4. Jung Uk Kim (15 papers)
  5. Yong Man Ro (91 papers)
Citations (18)
