Adversarially Robust Training through Structured Gradient Regularization (1805.08736v1)

Published 22 May 2018 in stat.ML and cs.LG

Abstract: We propose a novel data-dependent structured gradient regularizer to increase the robustness of neural networks vis-a-vis adversarial perturbations. Our regularizer can be derived as a controlled approximation from first principles, leveraging the fundamental link between training with noise and regularization. It adds very little computational overhead during learning and is simple to implement generically in standard deep learning frameworks. Our experiments provide strong evidence that structured gradient regularization can act as an effective first line of defense against attacks based on low-level signal corruption.
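The abstract describes a penalty on input gradients shaped by a data-dependent structure matrix. Below is a minimal sketch of how such a structured input-gradient penalty could be added generically to training in a standard framework (PyTorch here); the helper name `structured_gradient_penalty`, the placeholder structure matrix `sigma`, and the weighting are illustrative assumptions rather than the paper's exact formulation.

```python
# Illustrative sketch of structured input-gradient regularization in PyTorch.
# The penalty form grad^T Sigma grad follows the abstract's description of a
# data-dependent structured regularizer; the concrete Sigma used here is a
# placeholder assumption, not the paper's specific choice.
import torch
import torch.nn as nn
import torch.nn.functional as F


def structured_gradient_penalty(model, x, y, sigma):
    """Return (task loss, structured penalty on input gradients).

    sigma: a (d, d) positive semi-definite matrix acting on flattened inputs,
    encoding assumed correlations of the input corruption (illustrative).
    """
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # Gradient of the loss w.r.t. the inputs, kept in the graph so the
    # penalty itself can be backpropagated through.
    grad = torch.autograd.grad(loss, x, create_graph=True)[0]
    g = grad.flatten(start_dim=1)                      # (batch, d)
    penalty = torch.einsum('bi,ij,bj->b', g, sigma, g).mean()
    return loss, penalty


if __name__ == "__main__":
    torch.manual_seed(0)
    d = 28 * 28
    model = nn.Sequential(nn.Flatten(), nn.Linear(d, 10))
    x = torch.rand(8, 1, 28, 28)
    y = torch.randint(0, 10, (8,))
    sigma = 0.1 * torch.eye(d)                         # placeholder structure matrix
    loss, penalty = structured_gradient_penalty(model, x, y, sigma)
    total = loss + penalty                             # regularization weight of 1, for illustration
    total.backward()
    print(float(loss), float(penalty))
```

Because the penalty only requires one extra gradient computation with respect to the inputs per batch, this style of regularizer matches the abstract's claim of low computational overhead and easy generic implementation.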

Authors (4)
  1. Kevin Roth (12 papers)
  2. Sebastian Nowozin (45 papers)
  3. Thomas Hofmann (121 papers)
  4. Aurelien Lucchi (75 papers)
Citations (23)
