SpectralDefense: Detecting Adversarial Attacks on CNNs in the Fourier Domain (2103.03000v2)

Published 4 Mar 2021 in cs.CV and cs.AI

Abstract: Despite the success of convolutional neural networks (CNNs) in many computer vision and image analysis tasks, they remain vulnerable to so-called adversarial attacks: small, crafted perturbations in the input images can lead to false predictions. A possible defense is to detect adversarial examples. In this work, we show how analysis in the Fourier domain of input images and feature maps can be used to distinguish benign test samples from adversarial images. We propose two novel detection methods: our first method employs the magnitude spectrum of the input images to detect an adversarial attack. This simple and robust classifier can successfully detect adversarial perturbations of three commonly used attack methods. The second method builds upon the first and additionally extracts the phase of Fourier coefficients of feature maps at different layers of the network. With this extension, we are able to improve adversarial detection rates compared to state-of-the-art detectors on five different attack methods.
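The first detector described in the abstract — classifying images by the magnitude spectrum of their Fourier transform — can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the function name, the log scaling, and the grayscale assumption are our own choices for clarity.

```python
import numpy as np

def fourier_magnitude_features(image):
    """Flatten the 2D Fourier magnitude spectrum of a grayscale image.

    Sketch of the paper's input-spectrum detector: the feature vector
    is the (shifted, log-scaled) magnitude spectrum, which a simple
    downstream classifier can separate into benign vs. adversarial.
    """
    spectrum = np.fft.fft2(image)          # 2D DFT of the image
    spectrum = np.fft.fftshift(spectrum)   # move zero frequency to the center
    magnitude = np.abs(spectrum)           # keep magnitude, discard phase
    return np.log1p(magnitude).ravel()     # log scale tames the dynamic range

# Toy usage: extract features for a batch of random 32x32 "images"
rng = np.random.default_rng(0)
images = rng.random((4, 32, 32))
features = np.stack([fourier_magnitude_features(img) for img in images])
print(features.shape)  # (4, 1024)
```

In practice these feature vectors would be fed to a binary classifier trained on benign and adversarially perturbed samples; the second method extends the feature set with the phase of Fourier coefficients taken from intermediate feature maps.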

Authors (4)
  1. Paula Harder (14 papers)
  2. Franz-Josef Pfreundt (22 papers)
  3. Margret Keuper (77 papers)
  4. Janis Keuper (66 papers)
Citations (47)
