
As Firm As Their Foundations: Can open-sourced foundation models be used to create adversarial examples for downstream tasks? (2403.12693v1)

Published 19 Mar 2024 in cs.CV

Abstract: Foundation models pre-trained on web-scale vision-language data, such as CLIP, are widely used as cornerstones of powerful machine learning systems. While pre-training offers clear advantages for downstream learning, it also endows downstream models with shared adversarial vulnerabilities that can be easily identified through the open-sourced foundation model. In this work, we expose such vulnerabilities in CLIP's downstream models and show that foundation models can serve as a basis for attacking their downstream systems. In particular, we propose a simple yet effective adversarial attack strategy termed Patch Representation Misalignment (PRM). Solely based on open-sourced CLIP vision encoders, this method produces adversaries that simultaneously fool more than 20 downstream models spanning 4 common vision-language tasks (semantic segmentation, object detection, image captioning and visual question answering). Our findings highlight the concerning safety risks introduced by the extensive use of public foundation models in the development of downstream systems, calling for extra caution in these scenarios.
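The mechanism the abstract describes, attacking downstream systems purely through the shared open-sourced CLIP vision encoder by misaligning patch-level representations, can be pictured as a PGD-style optimization against the encoder's patch tokens. The sketch below is an illustration rather than the paper's exact recipe: the `encoder` wrapper (assumed to return patch-token embeddings), the cosine-similarity objective, and the hyperparameters `eps`, `alpha`, and `steps` are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def prm_attack_sketch(encoder, image, eps=8/255, alpha=1/255, steps=100):
    """Illustrative PGD-style patch-representation misalignment.

    Assumes `encoder(x)` returns patch-token embeddings of shape
    (batch, num_patches, dim) from an open-sourced CLIP vision encoder,
    and `image` is a (batch, 3, H, W) tensor in [0, 1].
    """
    # Patch embeddings of the clean image serve as the reference to move away from.
    with torch.no_grad():
        clean_patches = F.normalize(encoder(image), dim=-1)

    # Random start inside the L-infinity ball, as in standard PGD.
    delta = torch.empty_like(image).uniform_(-eps, eps).requires_grad_(True)

    for _ in range(steps):
        adv_patches = F.normalize(encoder((image + delta).clamp(0, 1)), dim=-1)
        # Mean cosine similarity between adversarial and clean patch embeddings;
        # driving it down misaligns every patch representation at once.
        loss = (adv_patches * clean_patches).sum(dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend on similarity
            delta.clamp_(-eps, eps)              # stay inside the epsilon ball
            delta.grad = None
    return (image + delta).clamp(0, 1).detach()
```

Note that this objective never queries any downstream model: the perturbation is computed against the public encoder alone, which is exactly the shared-vulnerability effect the abstract reports across its 20+ downstream models.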

Authors (5)
  1. Anjun Hu (6 papers)
  2. Jindong Gu (101 papers)
  3. Francesco Pinto (18 papers)
  4. Konstantinos Kamnitsas (50 papers)
  5. Philip Torr (172 papers)
Citations (3)
