Imperceptible Adversarial Examples for Fake Image Detection (2106.01615v1)

Published 3 Jun 2021 in cs.CV and cs.AI

Abstract: Fooling people with highly realistic fake images generated by Deepfake or GANs causes great social disturbance. Many methods have been proposed to detect fake images, but they are vulnerable to adversarial perturbations -- intentionally designed noise that can lead to wrong predictions. Existing methods of attacking fake image detectors usually generate adversarial perturbations that cover almost the entire image. This is redundant and increases the perceptibility of the perturbations. In this paper, we propose a novel method to disrupt fake image detection by determining the pixels that are key to a fake image detector and attacking only those pixels, which makes the $L_0$ and $L_2$ norms of the adversarial perturbations much smaller than those of existing works. Experiments on two public datasets with three fake image detectors indicate that our proposed method achieves state-of-the-art performance in both white-box and black-box attacks.
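
The abstract describes the approach only at a high level: identify the pixels that matter most to the detector, then confine the adversarial perturbation to those key pixels. As an illustration only, not the authors' published algorithm, the sketch below approximates this idea with gradient saliency and a masked iterative attack in PyTorch; the detector interface, the "real" class index, and all hyperparameters (`k`, `steps`, `alpha`) are assumptions.

```python
import torch
import torch.nn.functional as F

def key_pixel_attack(detector, image, k=500, steps=40, alpha=1e-2):
    """Perturb only the k most salient pixels of a fake image so the
    detector is pushed toward predicting "real".

    Assumptions (hypothetical, for illustration): `detector` maps a
    single-image batch (1, C, H, W) to logits over {real, fake}, with
    class 0 = "real"; everything runs on CPU.
    """
    image = image.clone().detach()
    target = torch.tensor([0])  # assumed index of the "real" class

    # 1. Rank pixels by gradient saliency w.r.t. the detector's loss.
    x = image.clone().requires_grad_(True)
    loss = F.cross_entropy(detector(x), target)
    loss.backward()
    saliency = x.grad.abs().sum(dim=1, keepdim=True)  # aggregate channels
    flat = saliency.flatten()
    mask = torch.zeros_like(flat)
    mask[flat.topk(k).indices] = 1.0
    mask = mask.view_as(saliency)  # 1 at the k key pixels, 0 elsewhere

    # 2. Iteratively perturb only the key pixels: masked gradient steps
    #    toward the "real" class, clamped to the valid image range.
    adv = image.clone()
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        loss = F.cross_entropy(detector(adv), target)
        loss.backward()
        with torch.no_grad():
            adv = adv - alpha * adv.grad.sign() * mask
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```

Restricting the update to a fixed mask bounds the $L_0$ norm of the perturbation by k pixels, and because far fewer pixels move, the $L_2$ norm also stays small, which is the advantage the abstract claims over whole-image attacks.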

Authors (9)
  1. Quanyu Liao (5 papers)
  2. Yuezun Li (37 papers)
  3. Xin Wang (1307 papers)
  4. Bin Kong (15 papers)
  5. Bin Zhu (218 papers)
  6. Siwei Lyu (125 papers)
  7. Youbing Yin (12 papers)
  8. Qi Song (73 papers)
  9. Xi Wu (100 papers)
Citations (34)
