
Efficient Key-Based Adversarial Defense for ImageNet by Using Pre-trained Model (2311.16577v1)

Published 28 Nov 2023 in cs.CV

Abstract: In this paper, we propose key-based defense model proliferation by leveraging pre-trained models and utilizing recent efficient fine-tuning techniques on ImageNet-1k classification. First, we stress that deploying key-based models on edge devices is feasible with the latest model deployment advancements, such as Apple CoreML, although the mainstream enterprise edge artificial intelligence (Edge AI) has been focused on the Cloud. Then, we point out that the previous key-based defense on on-device image classification is impractical for two reasons: (1) training many classifiers from scratch is not feasible, and (2) key-based defenses still need to be thoroughly tested on large datasets like ImageNet. To this end, we propose to leverage pre-trained models and utilize efficient fine-tuning techniques to proliferate key-based models even on limited computing resources. Experiments were carried out on the ImageNet-1k dataset using adaptive and non-adaptive attacks. The results show that our proposed fine-tuned key-based models achieve a superior classification accuracy (more than 10% increase) compared to the previous key-based models on classifying clean and adversarial examples.
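To make the described pipeline concrete, below is a minimal sketch of a key-based defense built on a pre-trained model. It assumes a block-wise pixel-shuffling transform keyed by a secret seed (in the spirit of earlier key-based transforms) and a torchvision ViT-B/16 pre-trained on ImageNet-1k; the block size, the exact transform, and the plain fine-tuning loop shown here are illustrative assumptions, and the paper itself relies on efficient fine-tuning techniques rather than updating all weights.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models


def keyed_block_shuffle(images: torch.Tensor, key: int, block_size: int = 16) -> torch.Tensor:
    """Shuffle pixels inside each non-overlapping block using a permutation
    derived from a secret key (hypothetical variant of a block-wise key-based
    transform; the paper's actual transform may differ)."""
    b, c, h, w = images.shape
    assert h % block_size == 0 and w % block_size == 0
    # One fixed permutation of the block_size*block_size positions, seeded by the key.
    rng = np.random.default_rng(key)
    perm = torch.from_numpy(rng.permutation(block_size * block_size))
    # Split the image into blocks: (b, c, nH, nW, block_size*block_size).
    x = images.unfold(2, block_size, block_size).unfold(3, block_size, block_size)
    x = x.contiguous().view(b, c, h // block_size, w // block_size, -1)
    # Apply the keyed permutation inside every block.
    x = x[..., perm]
    # Fold the blocks back into a full-resolution image.
    x = x.view(b, c, h // block_size, w // block_size, block_size, block_size)
    x = x.permute(0, 1, 2, 4, 3, 5).contiguous().view(b, c, h, w)
    return x


# Start from an ImageNet-1k pre-trained classifier and fine-tune it on
# key-transformed inputs (full fine-tuning here for brevity).
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)


def training_step(images: torch.Tensor, labels: torch.Tensor, key: int = 42) -> float:
    """One optimization step on images transformed with the secret key."""
    optimizer.zero_grad()
    logits = model(keyed_block_shuffle(images, key))
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time the same secret key is applied to every input before classification, so an attacker without the key optimizes perturbations against the wrong input representation; swapping the plain optimizer for a parameter-efficient method (e.g. low-rank adapters) would match the paper's emphasis on proliferating many keyed models cheaply.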

