
Benchmarking Robust Self-Supervised Learning Across Diverse Downstream Tasks (2407.12588v2)

Published 17 Jul 2024 in cs.CV and cs.AI

Abstract: Large-scale vision models have become integral in many applications due to their unprecedented performance and versatility across downstream tasks. However, the robustness of these foundation models has primarily been explored for a single task, namely image classification. The vulnerability of other common vision tasks, such as semantic segmentation and depth estimation, remains largely unknown. We present a comprehensive empirical evaluation of the adversarial robustness of self-supervised vision encoders across multiple downstream tasks. Our attacks operate in the encoder embedding space and at the downstream task output level. In both cases, current state-of-the-art adversarial fine-tuning techniques tested only for classification significantly degrade clean and robust performance on other tasks. Since the purpose of a foundation model is to cater to multiple applications at once, our findings reveal the need to enhance encoder robustness more broadly. Our code is available at github.com/layer6ai-labs/ssl-robustness.
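The abstract's embedding-space attacks can be illustrated with a minimal sketch: a PGD-style loop that perturbs the input to maximize the L2 distance between clean and adversarial embeddings, under an L-infinity budget. This is not the paper's implementation; the toy linear encoder, the finite-difference gradient (used here to keep the sketch dependency-free), and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def embedding_attack(encode, x, eps=0.1, alpha=0.02, steps=20, h=1e-5, seed=1):
    """PGD-style embedding-space attack (illustrative sketch).

    Maximizes ||encode(x + delta) - encode(x)||^2 subject to
    ||delta||_inf <= eps, via sign-gradient ascent with projection.
    The gradient is approximated by central finite differences so the
    example runs on any black-box `encode`; a real implementation would
    backpropagate through the encoder instead.
    """
    z_clean = encode(x)
    loss = lambda d: float(np.sum((encode(x + d) - z_clean) ** 2))

    # Random start inside the L-inf ball: at delta = 0 the gradient of
    # the squared distance vanishes, so PGD must not start at the origin.
    rng = np.random.default_rng(seed)
    delta = rng.uniform(-eps, eps, size=x.shape)

    for _ in range(steps):
        # Central-difference gradient of the loss w.r.t. delta.
        grad = np.zeros_like(delta)
        for i in range(delta.size):
            e = np.zeros_like(delta)
            e.flat[i] = h
            grad.flat[i] = (loss(delta + e) - loss(delta - e)) / (2 * h)
        # Ascent step on the gradient sign, then project back to the ball.
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)

    return x + delta
```

As a usage sketch, any callable encoder works; a random linear map stands in for a self-supervised vision encoder:

```python
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))          # toy "encoder" weights (assumption)
encode = lambda v: W @ v
x = rng.normal(size=4)
x_adv = embedding_attack(encode, x)  # stays within the eps-ball of x
```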

Authors (9)
  1. Antoni Kowalczuk (4 papers)
  2. Jan DubiƄski (16 papers)
  3. Atiyeh Ashari Ghomi (5 papers)
  4. Yi Sui (16 papers)
  5. George Stein (28 papers)
  6. Jiapeng Wu (8 papers)
  7. Jesse C. Cresswell (39 papers)
  8. Franziska Boenisch (40 papers)
  9. Adam Dziedzic (47 papers)
