Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction (1911.11616v1)

Published 22 Nov 2019 in eess.IV, cs.CR, cs.CV, and cs.LG

Abstract: Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they remain adversarial even against other models. Although considerable effort has been devoted to transferability across models, surprisingly, less attention has been paid to cross-task transferability, which reflects the real-world cybercriminal's situation, where an ensemble of different defense/detection mechanisms must be evaded all at once. In this paper, we investigate the transferability of adversarial examples across a wide range of real-world computer vision tasks, including image classification, object detection, semantic segmentation, explicit content detection, and text detection. Our proposed attack minimizes the "dispersion" of the internal feature map, overcoming existing attacks' limitation of requiring task-specific loss functions and/or probing a target model. We evaluate on open-source detection and segmentation models as well as four different computer vision tasks provided by Google Cloud Vision (GCV) APIs, and show that our approach outperforms existing attacks, degrading the performance of multiple CV tasks by a large margin with only modest perturbations (L∞ = 16).
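The abstract describes an attack that needs no task-specific loss: it simply drives down the dispersion of an intermediate feature map under an L∞ constraint. Below is a minimal, hypothetical sketch of that idea in PyTorch, assuming dispersion is measured as the standard deviation of the feature map and using a PGD-style update; the choice of surrogate model (VGG-16), the hooked layer, and the step size/iteration count are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of a dispersion-reduction-style attack.
# Assumptions: dispersion = std of an internal feature map; PGD-style
# sign-gradient descent; layer choice and hyperparameters are illustrative.
import torch
import torchvision.models as models

def dispersion_reduction_attack(image, model, layer, eps=16/255, alpha=2/255, steps=40):
    """Minimize the std (dispersion) of `layer`'s feature map within an L-inf ball."""
    feats = {}

    def hook(_module, _inputs, output):
        feats["out"] = output  # capture the intermediate feature map

    handle = layer.register_forward_hook(hook)
    adv = image.clone().detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        model(adv)                       # forward pass fills feats["out"]
        loss = feats["out"].std()        # dispersion of the feature map
        model.zero_grad(set_to_none=True)
        loss.backward()
        with torch.no_grad():
            adv = adv - alpha * adv.grad.sign()            # descend to reduce dispersion
            adv = image + (adv - image).clamp(-eps, eps)   # project back into L-inf ball
            adv = adv.clamp(0.0, 1.0)                      # keep a valid image
        adv = adv.detach()

    handle.remove()
    return adv

# Illustrative usage: attack a VGG-16 surrogate at an assumed mid-network layer.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
layer = model.features[14]               # assumed layer; the paper's choice may differ
x = torch.rand(1, 3, 224, 224)           # placeholder input in [0, 1]
x_adv = dispersion_reduction_attack(x, model, layer)
```

Because the loss depends only on the surrogate's internal features, the same perturbation can then be submitted unchanged to black-box targets (e.g., the GCV APIs), which is what makes the attack cross-task by construction. Note that eps=16/255 corresponds to the paper's L∞ = 16 budget on the 0-255 pixel scale.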

Authors (7)
  1. Yantao Lu (10 papers)
  2. Yunhan Jia (5 papers)
  3. Jianyu Wang (84 papers)
  4. Bai Li (33 papers)
  5. Weiheng Chai (3 papers)
  6. Lawrence Carin (203 papers)
  7. Senem Velipasalar (61 papers)
Citations (76)
