Adversarial Robustness vs Model Compression, or Both? (1903.12561v5)

Published 29 Mar 2019 in cs.CV, cs.CR, and cs.LG

Abstract: It is well known that deep neural networks (DNNs) are vulnerable to adversarial attacks, which add carefully crafted perturbations to benign examples. Adversarial training based on min-max robust optimization can provide a notion of security against such attacks. However, adversarial robustness requires a significantly larger network capacity than natural training on benign examples alone. This paper proposes a framework of concurrent adversarial training and weight pruning that enables model compression while preserving adversarial robustness, essentially resolving this dilemma of adversarial training. Furthermore, this work studies two hypotheses about weight pruning in the conventional setting and finds that weight pruning is essential for reducing the network model size in the adversarial setting: training a small model from scratch, even with initialization inherited from the large model, cannot achieve both adversarial robustness and high standard accuracy. Code is available at https://github.com/yeshaokai/Robustness-Aware-Pruning-ADMM.
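The min-max formulation behind adversarial training pairs an inner maximization (finding a worst-case perturbation, e.g., via projected gradient descent) with an outer minimization over the network weights; the paper's framework additionally constrains those weights to a sparse pattern. Below is a minimal PyTorch sketch of that alternating structure. It is not the authors' ADMM-based method: it uses plain PGD adversarial training with a fixed magnitude-pruning mask, and names such as `model`, `loader`, `opt`, and the `sparsity` level are illustrative assumptions.

```python
# Sketch only: PGD adversarial training combined with a fixed magnitude-pruning
# mask. The paper itself solves the pruning constraint jointly via ADMM.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: craft a worst-case perturbation within an L-inf ball."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        # Ascend the loss, then project back into the eps-ball and valid pixel range.
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.data = (x + delta.data).clamp(0, 1) - x
        delta.grad.zero_()
    return (x + delta).detach()

def magnitude_masks(model, sparsity=0.9):
    """Per-layer masks that zero the smallest-magnitude weights (hypothetical helper)."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:  # prune weight matrices/filters, leave biases dense
            k = int(p.numel() * sparsity)
            threshold = p.abs().flatten().kthvalue(k).values
            masks[name] = (p.abs() > threshold).float()
    return masks

def adv_train_pruned(model, loader, opt, masks, device="cpu"):
    """Outer minimization: train on adversarial examples, re-applying the masks."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        opt.step()
        with torch.no_grad():  # enforce the sparse weight pattern after each update
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])
```

In the paper, the hard mask above is replaced by an ADMM-regularized pruning constraint optimized concurrently with the adversarial training objective; the sketch only conveys how the inner attack, the outer weight update, and the sparsity enforcement interleave.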

Authors (9)
  1. Shaokai Ye (20 papers)
  2. Kaidi Xu (85 papers)
  3. Sijia Liu (204 papers)
  4. Jan-Henrik Lambrechts (1 paper)
  5. Huan Zhang (171 papers)
  6. Aojun Zhou (45 papers)
  7. Kaisheng Ma (46 papers)
  8. Yanzhi Wang (197 papers)
  9. Xue Lin (92 papers)
Citations (156)