Fooling Contrastive Language-Image Pre-trained Models with CLIPMasterPrints (2307.03798v3)
Abstract: Models leveraging both visual and textual data, such as Contrastive Language-Image Pre-training (CLIP), are the backbone of many recent advances in artificial intelligence. In this work, we show that despite their versatility, such models are vulnerable to what we refer to as fooling master images. Fooling master images are capable of maximizing the confidence score of a CLIP model for a significant number of widely varying prompts, while being either unrecognizable or unrelated to the attacked prompts for humans. The existence of such images is problematic as they could be used by bad actors to maliciously interfere with CLIP-trained image retrieval models in production with comparatively small effort, since a single image can attack many different prompts. We demonstrate how fooling master images for CLIP (CLIPMasterPrints) can be mined using stochastic gradient descent, projected gradient descent, or black-box optimization. In contrast to many common adversarial attacks, the black-box optimization approach allows us to mine CLIPMasterPrints even when the weights of the model are not accessible. We investigate the properties of the mined images and find that images trained on a small number of image captions generalize to a much larger number of semantically related captions. We evaluate possible mitigation strategies, in which we increase the robustness of the model, and introduce an approach to automatically detect CLIPMasterPrints in order to sanitize the input of vulnerable models. Finally, we find that the vulnerability to CLIPMasterPrints is related to a modality gap in contrastive pre-trained multi-modal networks. Code is available at https://github.com/matfrei/CLIPMasterPrints.
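The gradient-based mining described in the abstract amounts to ascending the mean cosine similarity between one candidate image's embedding and the fixed embeddings of the attacked prompts. The sketch below illustrates only this core idea: it substitutes tiny random linear maps for CLIP's image and text encoders, so all dimensions, the step size, and the encoder stand-ins are illustrative assumptions, not the paper's actual implementation (which optimizes real images against a real CLIP model).

```python
import math
import random

random.seed(0)

# Toy stand-ins for CLIP's encoders: random linear maps followed by L2
# normalization (illustrative assumption only; not the paper's setup).
D_IMG, D_TXT, D_EMB = 12, 8, 6
W_img = [[random.gauss(0, 1) for _ in range(D_IMG)] for _ in range(D_EMB)]
W_txt = [[random.gauss(0, 1) for _ in range(D_TXT)] for _ in range(D_EMB)]

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def embed(x, W):
    z = matvec(W, x)
    n = math.sqrt(sum(v * v for v in z))
    return [v / n for v in z]

# Unit embeddings of the attacked prompts (fixed during mining).
prompts = [embed([random.gauss(0, 1) for _ in range(D_TXT)], W_txt)
           for _ in range(5)]
# Maximizing the mean cosine similarity over all prompts is the same as
# maximizing similarity to the mean prompt embedding.
tbar = [sum(p[i] for p in prompts) / len(prompts) for i in range(D_EMB)]

def mean_similarity(x):
    e = embed(x, W_img)
    return sum(t * v for t, v in zip(tbar, e))

def grad(x):
    """Analytic gradient of mean_similarity w.r.t. the candidate image x."""
    z = matvec(W_img, x)
    n = math.sqrt(sum(v * v for v in z))
    e = [v / n for v in z]
    te = sum(t * v for t, v in zip(tbar, e))
    # Gradient through the L2 normalization: (tbar - (tbar . e) e) / ||z||.
    dz = [(t - te * v) / n for t, v in zip(tbar, e)]
    # Chain rule back through the linear map: W_img^T dz.
    return [sum(W_img[j][i] * dz[j] for j in range(D_EMB))
            for i in range(D_IMG)]

x = [random.gauss(0, 1) for _ in range(D_IMG)]  # candidate "master image"
s0 = mean_similarity(x)
for _ in range(300):
    g = grad(x)
    x = [v + 0.5 * gi for v, gi in zip(x, g)]   # plain gradient ascent
s1 = mean_similarity(x)
print(f"mean similarity before/after mining: {s0:.3f} -> {s1:.3f}")
```

A single optimized vector ends up close to all prompt embeddings at once, mirroring how one CLIPMasterPrint can attack many prompts; the paper's black-box variant replaces the gradient step with an evolution strategy such as CMA-ES, which needs no access to model weights.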
- Matthias Freiberger
- Peter Kun
- Christian Igel
- Anders Sundnes Løvlie
- Sebastian Risi