A Data-Centric Approach for Improving Adversarial Training Through the Lens of Out-of-Distribution Detection (2301.10454v1)

Published 25 Jan 2023 in cs.LG and cs.CV

Abstract: Current machine learning models achieve super-human performance in many real-world applications, yet they remain susceptible to imperceptible adversarial perturbations. The most effective solution to this problem is adversarial training, which trains the model on adversarially perturbed samples instead of the original ones. Various methods have been developed in recent years to improve adversarial training, such as data augmentation or modifying the training attacks. In this work, we examine the same problem from a new data-centric perspective. For this purpose, we first demonstrate that the existing model-based methods can be equivalent to applying smaller perturbations or optimization weights to the hard training examples. Building on this finding, we propose detecting and removing these hard samples directly from the training procedure rather than applying complicated algorithms to mitigate their effects. For detection, we use the maximum softmax probability, an effective method in out-of-distribution detection, since the hard samples can be considered out-of-distribution with respect to the whole data distribution. Our results on the SVHN and CIFAR-10 datasets show the effectiveness of this method in improving adversarial training without adding significant computational cost.
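The abstract's detection step can be sketched in a few lines: score each training sample by its maximum softmax probability (MSP) and drop the lowest-scoring ("hard") samples before adversarial training. This is a minimal illustration under assumed details — the function names, the `keep_fraction` parameter, and the use of NumPy arrays of logits are all hypothetical, not taken from the paper's implementation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def msp_scores(logits):
    # Maximum softmax probability per sample: low MSP suggests
    # the sample is "hard" / out-of-distribution-like
    return softmax(logits).max(axis=1)

def filter_hard_samples(logits, keep_fraction=0.9):
    # Keep the highest-MSP samples and drop the rest from training
    # (keep_fraction is a hypothetical hyperparameter for this sketch)
    scores = msp_scores(logits)
    n_keep = int(len(scores) * keep_fraction)
    keep_idx = np.argsort(scores)[::-1][:n_keep]
    return np.sort(keep_idx)
```

In this sketch, the surviving indices would then select the subset of the dataset actually used for adversarial training, replacing the more complicated per-sample reweighting schemes the abstract mentions.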

Authors (5)
  1. Mohammad Azizmalayeri (12 papers)
  2. Arman Zarei (8 papers)
  3. Alireza Isavand (1 paper)
  4. Mohammad Taghi Manzuri (5 papers)
  5. Mohammad Hossein Rohban (43 papers)
