Transferable Learned Image Compression-Resistant Adversarial Perturbations
Abstract: Adversarial attacks can readily disrupt image classification systems, revealing the vulnerability of DNN-based recognition tasks. Existing adversarial perturbations are primarily applied to uncompressed images or to images compressed with traditional codecs such as JPEG; few studies have investigated the robustness of image classification models in the context of DNN-based image compression. With the rapid evolution of image compression, DNN-based learned image compression has emerged as a promising approach for transmitting images in many security-critical applications, such as cloud-based face recognition and autonomous driving, owing to its superior rate-distortion performance over traditional codecs. There is therefore a pressing need to investigate the robustness of classification systems whose inputs are pre-processed by learned image compression. To bridge this research gap, we explore adversarial attacks on a new pipeline in which a learned image compressor serves as a pre-processing module for the target classification model. Furthermore, to enhance the transferability of perturbations across different quality levels and architectures of learned image compression models, we introduce a saliency score-based sampling method that enables fast generation of transferable perturbations. Extensive experiments with popular attack methods demonstrate the enhanced transferability of our method when attacking images post-processed by different learned image compression models.
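The pipeline described above places a differentiable learned compressor in front of the classifier, so a gradient-based attack must backpropagate through the compression step. The following is a minimal, self-contained sketch of that idea using an FGSM step; the linear map `A` is a hypothetical stand-in for a learned codec's analysis/synthesis transforms, and `W` is a toy linear classifier — neither comes from the paper, which attacks real learned codecs and DNN classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

d, n_classes = 16, 3
# Stand-in differentiable "compressor": a linear map A (placeholder for the
# Jacobian of a learned codec's encode/decode round trip).
A = rng.normal(size=(d, d)) / np.sqrt(d)
# Toy linear classifier operating on the compressor's output.
W = rng.normal(size=(n_classes, d)) / np.sqrt(d)

def fgsm_through_compressor(x, y, eps=0.05):
    """One FGSM step with the gradient taken through the compression step."""
    logits = W @ (A @ x)
    p = softmax(logits)
    grad_logits = p.copy()
    grad_logits[y] -= 1.0                 # d(cross-entropy)/d(logits)
    grad_x = A.T @ (W.T @ grad_logits)    # chain rule through the compressor
    # Step in the sign of the gradient, then keep the image in valid range.
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

x = rng.uniform(size=d)   # a toy "image" in [0, 1]
y = 0                     # its true label
x_adv = fgsm_through_compressor(x, y, eps=0.05)
```

Because the perturbation is computed through `A`, it targets the classifier's view of the *decoded* image rather than the raw input; the paper's saliency score-based sampling extends this by sampling across several compressor models and quality levels so one perturbation transfers to codecs not seen during the attack.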