
Adversarial Purification of Information Masking (2311.15339v1)

Published 26 Nov 2023 in cs.CV, cs.CR, cs.LG, and eess.IV

Abstract: Adversarial attacks meticulously generate minuscule, imperceptible perturbations to images to deceive neural networks. Counteracting these, adversarial purification methods seek to transform adversarial input samples into clean output images to defend against adversarial attacks. Nonetheless, extant generative models fail to effectively eliminate adversarial perturbations, yielding less-than-ideal purification results. We emphasize the potential threat that residual adversarial perturbations pose to target models, quantitatively establishing a relationship between perturbation scale and attack capability. Notably, the residual perturbations on the purified image primarily stem from the same-position patch and similar patches of the adversarial sample. We propose a novel adversarial purification approach named Information Mask Purification (IMPure), which aims to extensively eliminate adversarial perturbations. Given an adversarial sample, we first mask part of the patch information, then reconstruct the patches to resist the adversarial perturbations they carry. We reconstruct all patches in parallel to obtain a cohesive image. Then, to protect the purified samples against potential perturbations from similar regions, we simulate this risk by randomly mixing the purified samples with the input samples before feeding them into the feature extraction network. Finally, we establish a combined constraint of pixel loss and perceptual loss to augment the model's reconstruction adaptability. Extensive experiments on the ImageNet dataset with three classifier models demonstrate that our approach achieves state-of-the-art results against nine adversarial attack methods. Implementation code and pre-trained weights can be accessed at https://github.com/NoWindButRain/IMPure.
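The pipeline the abstract describes (patch masking, reconstruction, random mixing of purified and input samples, and a combined pixel + perceptual loss) can be sketched as follows. This is a minimal NumPy illustration of the general idea, not the authors' implementation: the patch size, mask ratio, loss weights, and the stand-in feature extractor are all hypothetical choices, and the real method trains a reconstruction network where this sketch simply zeroes patches.

```python
import numpy as np

def mask_patches(img, patch=16, mask_ratio=0.5, rng=None):
    """Zero out a random subset of non-overlapping patches.

    Masking discards part of the (potentially adversarial) patch
    information; in IMPure a reconstruction network would then
    inpaint the masked regions, here we only zero them to show
    which pixels the model must restore.
    """
    rng = rng or np.random.default_rng(0)
    h, w, _ = img.shape
    rows, cols = h // patch, w // patch
    out = img.copy()
    n_mask = int(rows * cols * mask_ratio)
    for i in rng.choice(rows * cols, size=n_mask, replace=False):
        r, c = divmod(i, cols)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch, :] = 0.0
    return out

def random_mix(purified, original, rng=None):
    """Randomly blend purified and input samples before feature
    extraction, simulating residual perturbations from similar
    regions of the adversarial input."""
    rng = rng or np.random.default_rng(1)
    alpha = rng.uniform(0.0, 1.0)
    return alpha * purified + (1.0 - alpha) * original

def combined_loss(recon, target, features, w_pixel=1.0, w_percep=0.1):
    """Pixel (L1) loss plus a perceptual loss computed in the
    feature space of an extractor network."""
    pixel = np.abs(recon - target).mean()
    percep = ((features(recon) - features(target)) ** 2).mean()
    return w_pixel * pixel + w_percep * percep

# Toy usage; the lambda stands in for a real feature extractor (e.g. a VGG).
img = np.random.default_rng(2).random((224, 224, 3))
masked = mask_patches(img)
mixed = random_mix(masked, img)
loss = combined_loss(mixed, img, features=lambda x: x.mean(axis=(0, 1)))
```

In the paper's setting the masked patches are reconstructed in parallel into a cohesive image before mixing; here the blend of `masked` and `img` merely demonstrates where that step sits in the pipeline.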

Authors (4)
  1. Sitong Liu (14 papers)
  2. Zhichao Lian (11 papers)
  3. Shuangquan Zhang (18 papers)
  4. Liang Xiao (80 papers)
