
Exploring Adversarial Attacks and Defenses in Vision Transformers trained with DINO (2206.06761v4)

Published 14 Jun 2022 in cs.CV and cs.AI

Abstract: This work conducts the first analysis of the adversarial robustness of self-supervised Vision Transformers trained with DINO. First, we evaluate whether features learned through self-supervision are more robust to adversarial attacks than those emerging from supervised learning. Then, we present properties that arise when attacks are performed in the latent space. Finally, we evaluate whether three well-known defense strategies can increase adversarial robustness in downstream tasks by fine-tuning only the classification head, providing robustness even with limited compute resources. These defense strategies are: Adversarial Training, Ensemble Adversarial Training, and Ensemble of Specialized Networks.
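As a rough illustration of the limited-compute setting described in the abstract (a defense such as Adversarial Training applied while fine-tuning only the classification head), the sketch below pairs an L-infinity PGD attack with a frozen DINO ViT-S/16 backbone loaded from the public DINO repository via torch.hub. The data loader, class count, attack budget, and the omission of input normalization are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch, under the assumptions stated above: PGD adversarial training
# of a linear classification head on top of a frozen, self-supervised DINO
# ViT-S/16 backbone. Hyperparameters and data handling are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen self-supervised backbone: only the linear head is trained.
backbone = torch.hub.load("facebookresearch/dino:main", "dino_vits16").to(device)
backbone.eval()
for p in backbone.parameters():
    p.requires_grad_(False)

head = nn.Linear(384, 10).to(device)  # 384 = ViT-S/16 embedding dim; 10 classes assumed
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)


def pgd_attack(x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Untargeted L-inf PGD on inputs in [0, 1], attacking backbone + head.
    (ImageNet normalization, omitted here, would normally wrap the backbone call.)"""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(head(backbone(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def train_head_adversarially(train_loader, epochs=5):
    """Adversarial training loop; train_loader is a hypothetical DataLoader
    yielding image batches in [0, 1] with integer labels."""
    for _ in range(epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(x, y)                       # craft perturbed inputs
            loss = F.cross_entropy(head(backbone(x_adv)), y)
            optimizer.zero_grad()
            loss.backward()                                # gradients reach only the head
            optimizer.step()
```

Because the backbone stays frozen, each PGD step still backpropagates through it to craft the perturbation, but only the linear head's parameters are ever updated, which keeps the compute cost of the defense small.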

Authors (4)
  1. Javier Rando (21 papers)
  2. Nasib Naimi (1 paper)
  3. Thomas Baumann (6 papers)
  4. Max Mathys (3 papers)
Citations (3)
