
Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness (1906.11235v1)

Published 26 Jun 2019 in cs.LG, cs.CV, and stat.ML

Abstract: This work provides theoretical and empirical evidence that invariance-inducing regularizers can increase predictive accuracy for worst-case spatial transformations (spatial robustness). Evaluated on these adversarially transformed examples, we demonstrate that adding regularization on top of standard or adversarial training reduces the relative error by 20% for CIFAR10 without increasing the computational cost. This outperforms handcrafted networks that were explicitly designed to be spatial-equivariant. Furthermore, we observe for SVHN, known to have inherent variance in orientation, that robust training also improves standard accuracy on the test set. We prove that this no-trade-off phenomenon holds for adversarial examples from transformation groups in the infinite data limit.
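The core idea in the abstract — find the worst-case transformation from a group and add an invariance penalty on top of the standard loss — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the toy linear model, the discrete 90-degree rotation group as the transformation set, and the logit-matching squared penalty are all illustrative assumptions.

```python
import numpy as np

def softmax_xent(logits, label):
    # numerically stable cross-entropy for a single example
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label] + 1e-12)

def predict(W, x):
    # toy linear classifier: flatten the image, apply weight matrix
    return W @ x.ravel()

def worst_case_transform(W, x, label, transforms):
    # grid search: pick the group element that maximizes the loss
    losses = [softmax_xent(predict(W, t(x)), label) for t in transforms]
    return transforms[int(np.argmax(losses))]

def regularized_loss(W, x, label, transforms, lam=1.0):
    # standard loss + invariance regularizer at the worst-case transform
    t_star = worst_case_transform(W, x, label, transforms)
    clean_logits = predict(W, x)
    adv_logits = predict(W, t_star(x))
    reg = np.sum((clean_logits - adv_logits) ** 2)  # logit-matching penalty
    return softmax_xent(clean_logits, label) + lam * reg

# illustrative transformation group: rotations by 0/90/180/270 degrees
transforms = [lambda x, k=k: np.rot90(x, k) for k in range(4)]

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))   # 3 classes, 4x4 input images
x = rng.normal(size=(4, 4))
loss = regularized_loss(W, x, label=0, transforms=transforms)
```

Because the penalty is nonnegative, the regularized loss upper-bounds the clean cross-entropy; driving the penalty to zero makes the model's logits invariant over the chosen group, which is the mechanism the paper argues can improve both robust and standard accuracy.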

Authors (3)
  1. Fanny Yang (38 papers)
  2. Zuowen Wang (9 papers)
  3. Christina Heinze-Deml (12 papers)
Citations (42)
