Pruning Adversarially Robust Neural Networks without Adversarial Examples (2210.04311v1)

Published 9 Oct 2022 in cs.LG, cs.AI, cs.CR, and cs.CV

Abstract: Adversarial pruning compresses models while preserving robustness, but current methods require access to adversarial examples during pruning, which significantly hampers training efficiency. Moreover, as new adversarial attacks and training methods develop at a rapid rate, adversarial pruning methods must be modified accordingly to keep up. In this work, we propose a novel framework to prune a previously trained robust neural network while maintaining adversarial robustness, without further generating adversarial examples. We leverage concurrent self-distillation and pruning to preserve knowledge in the original model, while regularizing the pruned model via the Hilbert-Schmidt Information Bottleneck. We comprehensively evaluate our framework and show superior performance in both adversarial robustness and efficiency when pruning architectures trained on the MNIST, CIFAR-10, and CIFAR-100 datasets against five state-of-the-art attacks. Code is available at https://github.com/neu-spiral/PwoA/.
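
The abstract combines two ingredients: self-distillation from the original robust model (so no new adversarial examples are generated) and a Hilbert-Schmidt Information Bottleneck (HSIC) regularizer on the pruned model's hidden representations. The sketch below illustrates how such a combined objective could be assembled in PyTorch. It is a minimal illustration only: the function name `distill_hsic_loss` and the weights `lam`, `beta`, `temp`, and `sigma` are hypothetical placeholders, not the paper's exact formulation (see the linked repository for the authors' actual implementation).

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(x, sigma=5.0):
    """Gaussian kernel matrix over a batch of flattened features."""
    x = x.flatten(1)
    sq_dist = torch.cdist(x, x) ** 2
    return torch.exp(-sq_dist / (2 * sigma ** 2))

def hsic(k_x, k_y):
    """Biased empirical HSIC estimate from two precomputed kernel matrices."""
    n = k_x.size(0)
    # Centering matrix H = I - (1/n) 11^T
    h = torch.eye(n, device=k_x.device) - 1.0 / n
    return torch.trace(k_x @ h @ k_y @ h) / (n - 1) ** 2

def distill_hsic_loss(student_logits, teacher_logits, hidden_feats,
                      inputs, labels, lam=1.0, beta=2.0, temp=4.0):
    """Hypothetical combined objective: self-distillation + HSIC bottleneck."""
    # Self-distillation: pull the pruned (student) network toward the
    # softened outputs of the original robust (teacher) network.
    distill = F.kl_div(
        F.log_softmax(student_logits / temp, dim=1),
        F.softmax(teacher_logits / temp, dim=1),
        reduction="batchmean",
    ) * temp ** 2

    # HSIC bottleneck: encourage each hidden representation to carry
    # little information about the raw input while staying dependent
    # on the labels.
    k_x = gaussian_kernel(inputs)
    k_y = gaussian_kernel(F.one_hot(labels, student_logits.size(1)).float())
    bottleneck = sum(
        hsic(gaussian_kernel(z), k_x) - beta * hsic(gaussian_kernel(z), k_y)
        for z in hidden_feats
    )
    return distill + lam * bottleneck
```

In this reading, the distillation term replaces adversarial example generation as the source of robustness, while the HSIC term regularizes the pruned network's intermediate layers; both are computed from clean inputs only, which is what makes the approach efficient relative to attack-in-the-loop pruning.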

Authors (5)
  1. Tong Jian (5 papers)
  2. Zifeng Wang (78 papers)
  3. Yanzhi Wang (197 papers)
  4. Jennifer Dy (46 papers)
  5. Stratis Ioannidis (67 papers)
Citations (8)