
Adversarial Fine-tune with Dynamically Regulated Adversary (2204.13232v1)

Published 28 Apr 2022 in cs.LG, cs.AI, and cs.CV

Abstract: Adversarial training is an effective method to boost model robustness to malicious adversarial attacks. However, such improvement in robustness often comes at a significant cost to standard performance on clean images. In many real-world applications, such as health diagnosis and autonomous surgical robotics, standard performance is valued more highly than robustness against such extremely malicious attacks. This raises the question: to what extent can we boost model robustness without sacrificing standard performance? This work tackles the problem with a simple yet effective transfer learning-based adversarial training strategy that disentangles the negative effects of adversarial samples from the model's standard performance. In addition, we introduce a training-friendly adversarial attack algorithm, which facilitates the boost in adversarial robustness without introducing significant training complexity. Extensive experiments indicate that the proposed method outperforms previous adversarial training algorithms on the stated goal: improving model robustness while preserving the model's standard performance on clean data.
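For context, the baseline the abstract contrasts against is standard (Madry-style) adversarial training, which alternates between crafting adversarial examples and updating the model on them. The sketch below illustrates that generic baseline only; the PGD attack, hyperparameters (eps, alpha, steps), and training-step structure are common illustrative defaults, not the paper's dynamically regulated adversary or its training-friendly attack, whose details are not given in the abstract.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Generic PGD attack (illustrative; not the paper's attack).

    Starts from a random point in the eps-ball around x, then takes
    signed-gradient ascent steps on the loss, projecting back into
    the ball after each step.
    """
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and into valid pixel range.
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One step of the standard adversarial training baseline."""
    model.eval()                      # keep BN statistics fixed while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

This baseline trains only on adversarial samples, which is exactly the source of the clean-accuracy drop the paper targets; the proposed method instead uses a transfer learning-based strategy to decouple that effect from standard performance.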

Authors (5)
  1. Pengyue Hou (3 papers)
  2. Ming Zhou (182 papers)
  3. Jie Han (93 papers)
  4. Petr Musilek (9 papers)
  5. Xingyu Li (104 papers)
Citations (3)
