Fooling Contrastive Language-Image Pre-trained Models with CLIPMasterPrints (2307.03798v3)

Published 7 Jul 2023 in cs.CV, cs.AI, cs.LG, and cs.NE

Abstract: Models leveraging both visual and textual data such as Contrastive Language-Image Pre-training (CLIP), are the backbone of many recent advances in artificial intelligence. In this work, we show that despite their versatility, such models are vulnerable to what we refer to as fooling master images. Fooling master images are capable of maximizing the confidence score of a CLIP model for a significant number of widely varying prompts, while being either unrecognizable or unrelated to the attacked prompts for humans. The existence of such images is problematic as it could be used by bad actors to maliciously interfere with CLIP-trained image retrieval models in production with comparably small effort as a single image can attack many different prompts. We demonstrate how fooling master images for CLIP (CLIPMasterPrints) can be mined using stochastic gradient descent, projected gradient descent, or blackbox optimization. Contrary to many common adversarial attacks, the blackbox optimization approach allows us to mine CLIPMasterPrints even when the weights of the model are not accessible. We investigate the properties of the mined images, and find that images trained on a small number of image captions generalize to a much larger number of semantically related captions. We evaluate possible mitigation strategies, where we increase the robustness of the model and introduce an approach to automatically detect CLIPMasterPrints to sanitize the input of vulnerable models. Finally, we find that vulnerability to CLIPMasterPrints is related to a modality gap in contrastive pre-trained multi-modal networks. Code available at https://github.com/matfrei/CLIPMasterPrints.

Authors (5)
  1. Matthias Freiberger (9 papers)
  2. Peter Kun (9 papers)
  3. Christian Igel (47 papers)
  4. Anders Sundnes Løvlie (18 papers)
  5. Sebastian Risi (77 papers)

Summary

Analyzing the Vulnerability of CLIP Models to Fooling Master Images

The paper "Fooling Contrastive Language-Image Pre-trained Models with CLIPMasterPrints" presents an in-depth analysis of the susceptibility of Contrastive Language-Image Pre-training (CLIP) models to specific adversarial examples termed "fooling master images" or "CLIPMasterPrints." These images are unique in that they can maximize the confidence score of a CLIP model across a range of diverse prompts, while appearing unrecognizable or unrelated to humans. This paper raises significant concerns regarding the robustness of CLIP models and similar multi-modal AI systems against adversarial attacks, with potential implications for their deployment in real-world applications.

Vulnerability of CLIP Models

The authors demonstrate that CLIP models, despite their utility in zero-shot image retrieval and classification tasks, can be substantially misled by CLIPMasterPrints. These adversarial images produce high confidence scores for a multitude of prompts, potentially disrupting the performance of CLIP-based systems. The paper details the process of mining these fooling master images using stochastic gradient descent (SGD), projected gradient descent (PGD), and blackbox optimization techniques.
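
As a rough illustration of the white-box variant, the sketch below performs gradient ascent on raw pixels to maximize the mean CLIP similarity across a handful of prompts. It assumes OpenAI's open-source `clip` package; the prompts, learning rate, iteration count, and the omission of CLIP's input normalization are simplifications for brevity, not the paper's actual setup.

```python
# Minimal sketch (not the paper's exact procedure): gradient ascent on pixels
# to maximize the mean CLIP similarity across several attacked prompts.
# Assumes OpenAI's `clip` package (pip install git+https://github.com/openai/CLIP).
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float().eval()  # keep everything in fp32 for straightforward autograd

prompts = ["a photo of a dog", "a photo of a car", "a photo of a mountain"]  # illustrative
with torch.no_grad():
    text_feats = model.encode_text(clip.tokenize(prompts).to(device))
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)

# Optimize the image directly in pixel space, starting from random noise.
# CLIP's usual input normalization is omitted here for brevity.
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=1e-2)

for step in range(1000):
    optimizer.zero_grad()
    img_feats = model.encode_image(image.clamp(0, 1))
    img_feats = img_feats / img_feats.norm(dim=-1, keepdim=True)
    loss = -(img_feats @ text_feats.T).mean()  # maximize mean cosine similarity
    loss.backward()
    optimizer.step()

fooling_image = image.detach().clamp(0, 1)  # candidate CLIPMasterPrint
```

A PGD-style variant would additionally project the image back onto the valid pixel range (or onto a norm ball around a starting image) after each update step.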

One of the notable techniques discussed is blackbox optimization, which does not require access to the CLIP model's weights. This makes the attack more practical, allowing adversaries to target CLIP models even when the model internals are inaccessible.
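
For the blackbox setting, one plausible realization is to run CMA-ES over a compact latent code that is decoded into an image, using only the model's similarity scores as fitness. In the sketch below, the tiny upsampled "latent image", the `cma` package, and all hyperparameters are stand-ins rather than the paper's configuration.

```python
# Hedged sketch of a blackbox variant: CMA-ES searches a compact latent code and
# queries CLIP only for similarity scores (no gradients, no weight access).
# The decoder below (a tiny image upsampled to 224x224) is a placeholder.
import cma  # pip install cma
import numpy as np
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float().eval()

prompts = ["a photo of a dog", "a photo of a car", "a photo of a mountain"]  # illustrative
with torch.no_grad():
    text_feats = model.encode_text(clip.tokenize(prompts).to(device))
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)

LATENT_HW = 16                        # optimize a 3x16x16 "latent image" (768 dims)
latent_dim = 3 * LATENT_HW * LATENT_HW

def decode(z: np.ndarray) -> torch.Tensor:
    # Placeholder decoder: squash the latent code into [0, 1] and upsample.
    img = torch.sigmoid(torch.as_tensor(z, dtype=torch.float32))
    img = img.reshape(1, 3, LATENT_HW, LATENT_HW)
    return F.interpolate(img, size=(224, 224), mode="bilinear", align_corners=False)

def fitness(z: np.ndarray) -> float:
    # CMA-ES minimizes, so return the negative mean similarity to the prompts.
    with torch.no_grad():
        img_feats = model.encode_image(decode(z).to(device))
        img_feats = img_feats / img_feats.norm(dim=-1, keepdim=True)
        return -(img_feats @ text_feats.T).mean().item()

es = cma.CMAEvolutionStrategy(np.zeros(latent_dim), 0.5, {"maxiter": 200})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [fitness(z) for z in candidates])

fooling_image = decode(es.result.xbest)
```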

Generalization and Mitigation

A striking property of CLIPMasterPrints is that they generalize beyond the targeted prompts to semantically related captions. This generalization amplifies the attack's impact and poses a challenge for the wider deployment of CLIP-based systems. The paper provides empirical evidence that CLIPMasterPrints mined from a small number of image captions generalize effectively to a much larger set of semantically related captions.
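
A quick way to probe this effect is sketched below: score a mined image against held-out captions that are semantically related to the attacked prompts and against unrelated control captions; a successful master print should also score highly on the related set. The caption lists and the placeholder `fooling_image` (in practice taken from one of the mining sketches above) are illustrative only.

```python
# Sketch: does a mined image also score highly on held-out, semantically
# related captions it was never optimized for? Caption lists are illustrative.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float().eval()

related = ["a photo of a puppy", "a photo of a truck", "a photo of a hillside"]
control = ["a photo of a violin", "a page of handwritten text"]

def mean_similarity(image, captions):
    with torch.no_grad():
        txt = model.encode_text(clip.tokenize(captions).to(device))
        txt = txt / txt.norm(dim=-1, keepdim=True)
        img = model.encode_image(image.to(device))
        img = img / img.norm(dim=-1, keepdim=True)
        return (img @ txt.T).mean().item()

# Replace with the image produced by one of the mining sketches above.
fooling_image = torch.rand(1, 3, 224, 224)

print("related:", mean_similarity(fooling_image, related))
print("control:", mean_similarity(fooling_image, control))
```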

To mitigate this vulnerability, the authors explore increasing model robustness by addressing the modality gap, the inherent misalignment between text and image embeddings in contrastively pre-trained multi-modal networks. They propose strategically shifting these embeddings to reduce the impact of CLIPMasterPrints. Additionally, the paper suggests input sanitization: automatically detecting and filtering out CLIPMasterPrints before they are processed by vulnerable models.
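
The paper's detector is not reproduced here, but a simple heuristic in a similar sanitization spirit is sketched below: query an incoming image against a diverse probe set of captions and flag it if its similarity profile is uniformly high, which natural images rarely exhibit. The probe captions, similarity threshold, and flagging fraction are placeholders, not values from the paper.

```python
# Hedged sketch of an input-sanitization heuristic (not the paper's exact detector):
# flag an image whose similarity to a diverse probe set of captions is uniformly high.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float().eval()

probe_captions = [  # deliberately diverse; illustrative only
    "a photo of a dog", "a satellite image of a city", "a bowl of soup",
    "a portrait of a person", "an abstract painting", "a screenshot of code",
]
with torch.no_grad():
    probe_feats = model.encode_text(clip.tokenize(probe_captions).to(device))
    probe_feats = probe_feats / probe_feats.norm(dim=-1, keepdim=True)

def looks_like_masterprint(image, min_sim=0.3, min_fraction=0.8):
    # Thresholds are illustrative placeholders, not values from the paper.
    with torch.no_grad():
        img = model.encode_image(image.to(device))
        img = img / img.norm(dim=-1, keepdim=True)
        sims = (img @ probe_feats.T).squeeze(0)
    return (sims > min_sim).float().mean().item() >= min_fraction
```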

Implications and Future Directions

The implications of this research are manifold. Practically, it underscores a crucial need for model robustness in contrastively trained multi-modal systems, particularly in applications where reliability and integrity are paramount. The theoretical insights into model vulnerability and generalization of adversarial examples may guide the development of more resilient architectures in the future.

The paper also opens avenues for further investigation into optimization techniques for mining adversarial examples and into integrating robust defenses during training. Future research could focus on refining these mitigation strategies so that they address the modality gap intrinsic to contrastive learning frameworks without inadvertently degrading model performance.

In conclusion, this paper provides a critical appraisal of the defenses necessary to safeguard CLIP models and potentially similar systems from sophisticated adversarial attacks, adding an essential layer to the ongoing discourse on AI safety and security.
