Adversarial Feature Desensitization (2006.04621v3)

Published 8 Jun 2020 in cs.LG and stat.ML

Abstract: Neural networks are known to be vulnerable to adversarial attacks -- slight but carefully constructed perturbations of the inputs which can drastically impair the network's performance. Many defense methods have been proposed for improving the robustness of deep networks by training them on adversarially perturbed inputs. However, these models often remain vulnerable to new types of attacks not seen during training, and even to slightly stronger versions of previously seen attacks. In this work, we propose a novel approach to adversarial robustness, which builds upon the insights from the domain adaptation field. Our method, called Adversarial Feature Desensitization (AFD), aims at learning features that are invariant to adversarial perturbations of the inputs. This is achieved through a game in which we learn features that are both predictive and robust (insensitive to adversarial attacks), i.e., features that cannot be used to discriminate between natural and adversarial data. Empirical results on several benchmarks demonstrate the effectiveness of the proposed approach against a wide range of attack types and attack strengths. Our code is available at https://github.com/BashivanLab/afd.
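The abstract frames AFD as a game: a feature extractor is trained to be predictive for the task while making natural and adversarial features indistinguishable to a discriminator. The following is a minimal, hedged sketch of that objective on toy data; all names (`feat`, `clf`, `disc`, `fgsm_attack`) and the single-step FGSM attack are illustrative assumptions, not the authors' actual implementation (see the linked repository for that).

```python
# Illustrative sketch of the AFD objective: task loss on natural and
# adversarial inputs, a natural-vs-adversarial feature discriminator,
# and a "fooling" term that desensitizes the features. Toy data only.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy 2-class problem in 8 dimensions.
x = torch.randn(64, 8)
y = (x[:, 0] > 0).long()

feat = nn.Sequential(nn.Linear(8, 16), nn.ReLU())  # feature extractor
clf = nn.Linear(16, 2)                             # task classifier
disc = nn.Linear(16, 1)                            # natural-vs-adversarial discriminator

def fgsm_attack(x, y, eps=0.1):
    """One-step FGSM perturbation against the task loss (stand-in attack)."""
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(clf(feat(x_adv)), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x + eps * grad.sign()).detach()

x_adv = fgsm_attack(x, y)

# Task term: features must remain predictive on natural and adversarial inputs.
task_loss = F.cross_entropy(clf(feat(x)), y) + F.cross_entropy(clf(feat(x_adv)), y)

# Discriminator term: label natural features 0, adversarial features 1.
# Features are detached so this loss only updates the discriminator.
logits_nat = disc(feat(x).detach())
logits_adv = disc(feat(x_adv).detach())
disc_loss = (
    F.binary_cross_entropy_with_logits(logits_nat, torch.zeros_like(logits_nat))
    + F.binary_cross_entropy_with_logits(logits_adv, torch.ones_like(logits_adv))
)

# Desensitization term: the feature extractor is updated to fool the
# discriminator, pushing adversarial features toward the "natural" label.
fool_logits = disc(feat(x_adv))
fool_loss = F.binary_cross_entropy_with_logits(fool_logits, torch.zeros_like(fool_logits))

# In a full training loop, feat/clf minimize (task_loss + fool_loss) while
# disc minimizes disc_loss, alternating as in a GAN.
feature_objective = task_loss + fool_loss
print(float(task_loss), float(disc_loss), float(fool_loss))
```

The adversarial-game structure mirrors domain-adversarial training from the domain adaptation literature the abstract cites as inspiration: "adversarial" here plays the discriminator role, with natural vs. perturbed inputs as the two domains.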

Authors (8)
  1. Pouya Bashivan (15 papers)
  2. Reza Bayat (5 papers)
  3. Adam Ibrahim (12 papers)
  4. Kartik Ahuja (43 papers)
  5. Mojtaba Faramarzi (6 papers)
  6. Touraj Laleh (2 papers)
  7. Blake Aaron Richards (4 papers)
  8. Irina Rish (85 papers)
Citations (19)