
An Empirical Study of Aegis (2404.15784v1)

Published 24 Apr 2024 in cs.LG

Abstract: Bit-flipping attacks are one class of attacks on neural networks, and numerous defense mechanisms have been proposed to mitigate their potency. Given the importance of ensuring the robustness of these defense mechanisms, we perform an empirical study of the Aegis framework. We evaluate the baseline mechanisms of Aegis on low-entropy data (MNIST), and we evaluate a pre-trained model with the mechanisms fine-tuned on MNIST. We also compare the use of data augmentation to the robustness training of Aegis, and we examine how Aegis performs under other adversarial attacks, such as the generation of adversarial examples. We find that both the dynamic-exit strategy and the robustness training of Aegis have drawbacks. In particular, we observe drops in accuracy, compared to the baselines, when testing on perturbed data and on adversarial examples. Moreover, we find that the dynamic-exit strategy loses its uniformity when tested on simpler datasets. The code for this project is available on GitHub.
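To make the two perturbation mechanisms in the abstract concrete, here is a minimal sketch of (1) a single bit flip in a stored int8 weight, the threat model Aegis defends against, and (2) one-step FGSM adversarial example generation (reference 2). All concrete values below (the weight 23, the quantization scale 0.05, the budget eps = 0.1, and the toy linear classifier) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# --- 1. One bit flip in an int8-quantized weight (threat model sketch) ---
# Weight value, bit position, and quantization scale are assumed for illustration.
w = torch.tensor([23], dtype=torch.int8)                   # bit pattern 00010111
w_flipped = (w.view(torch.uint8) ^ 0x80).view(torch.int8)  # flip the most significant bit
scale = 0.05                                               # assumed per-tensor scale
print(w.item() * scale, "->", w_flipped.item() * scale)    # ~1.15 -> -5.25

# --- 2. FGSM adversarial example generation (reference 2) ---
# A toy linear classifier on a random input stands in for the evaluated models.
model = torch.nn.Linear(784, 10)
x = torch.rand(1, 784, requires_grad=True)                 # stand-in "MNIST" input
y = torch.tensor([3])                                      # arbitrary label
loss = F.cross_entropy(model(x), y)
loss.backward()
eps = 0.1                                                  # perturbation budget (assumed)
x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach() # one-step FGSM perturbation
```

Note how flipping just the sign bit moves the dequantized weight from roughly 1.15 to -5.25, across most of the quantization range; this is why a handful of targeted flips can degrade a network far more than random noise of the same magnitude.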

References (12)
  1. Aegis: Mitigating targeted bit-flip attacks against deep neural networks, 2023.
  2. Explaining and harnessing adversarial examples, 2015.
  3. DeepHammer: Depleting the intelligence of deep neural networks through targeted chain of bit flips. In 29th USENIX Security Symposium (USENIX Security 20), pages 1463–1480. USENIX Association, August 2020.
  4. Defending and harnessing the bit-flip based adversarial weight attack. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14083–14091, 2020.
  5. RA-BNN: Constructing robust & accurate binary neural network to simultaneously defend adversarial bit-flip attack and improve accuracy, 2021.
  6. ModelShield: A generic and portable framework extension for defending bit-flip based adversarial weight attacks. In 2021 IEEE 39th International Conference on Computer Design (ICCD), pages 559–562, 2021.
  7. Deep residual learning for image recognition, 2015.
  8. Very deep convolutional networks for large-scale image recognition, 2015.
  9. Alex Krizhevsky. Learning multiple layers of features from tiny images. University of Toronto, 2009.
  10. Towards deep learning models resistant to adversarial attacks, 2019.
  11. ProFlip: Targeted Trojan attack with progressive bit flips. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 7698–7707, 2021.
  12. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pages 372–387, 2016.
