Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning (2003.12862v1)

Published 28 Mar 2020 in cs.CV and cs.LG

Abstract: Pretrained models from self-supervision are prevalently used in fine-tuning downstream tasks faster or for better accuracy. However, gaining robustness from pretraining is left unexplored. We introduce adversarial training into self-supervision, to provide general-purpose robust pre-trained models for the first time. We find these robust pre-trained models can benefit the subsequent fine-tuning in two ways: i) boosting final model robustness; ii) saving the computation cost, if proceeding towards adversarial fine-tuning. We conduct extensive experiments to demonstrate that the proposed framework achieves large performance margins (e.g., 3.83% on robust accuracy and 1.3% on standard accuracy, on the CIFAR-10 dataset), compared with the conventional end-to-end adversarial training baseline. Moreover, we find that different self-supervised pre-trained models have a diverse adversarial vulnerability. It inspires us to ensemble several pretraining tasks, which boosts robustness more. Our ensemble strategy contributes to a further improvement of 3.59% on robust accuracy, while maintaining a slightly higher standard accuracy on CIFAR-10. Our codes are available at https://github.com/TAMU-VITA/Adv-SS-Pretraining.

Authors (6)
  1. Tianlong Chen (202 papers)
  2. Sijia Liu (204 papers)
  3. Shiyu Chang (120 papers)
  4. Yu Cheng (354 papers)
  5. Lisa Amini (7 papers)
  6. Zhangyang Wang (375 papers)
Citations (232)

Summary

Adversarial Robustness in Self-Supervised Pretraining and Fine-Tuning

The paper "Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning" presents a novel exploration of adversarial robustness within the domain of self-supervised learning. Self-supervised learning is a powerful tool in machine learning, enabling effective model pretraining without labeled datasets. However, the robustness of such pretrained models to adversarial threats had received little systematic attention before this work.

Overview

Pretrained models from self-supervision are widely used to make fine-tuning on downstream tasks faster or more accurate. Introducing adversarial training into the self-supervision phase is a critical step toward models that are not only accurate but also robust to adversarial perturbations. The paper identifies two key benefits of adversarial training in the self-supervised pretraining phase (a minimal sketch of an adversarial pretraining step follows the list):

  1. Enhanced robustness in the resultant models following the fine-tuning process.
  2. Reduced computational costs, particularly when the subsequent task includes adversarial fine-tuning.
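To make the mechanism concrete, here is a minimal PyTorch sketch of one adversarial pretraining step, assuming a rotation-prediction pretext task and a PGD attack. The function names, hyperparameters (eps=8/255, 10 PGD steps), and the choice of pretext task are illustrative assumptions, not the authors' exact implementation; see the linked repository for that.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L-infinity PGD adversarial examples against a pretext-task loss."""
    model.eval()  # freeze batch-norm statistics while crafting the attack
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # gradient ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def adversarial_pretrain_step(model, optimizer, x):
    """One adversarial training step on the rotation-prediction pretext task."""
    # Self-supervised labels: each image is rotated by 0/90/180/270 degrees,
    # and the model must predict which rotation was applied (4-way classification).
    xs = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)])
    ys = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    xs_adv = pgd_attack(model, xs, ys)  # attack the pretext loss, no labels needed
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(xs_adv), ys)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key point the sketch illustrates is that the attack is generated against the self-supervised objective itself, so no downstream labels are required during robust pretraining.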

Key Findings

Through comprehensive experimentation, the authors provide evidence supporting the effectiveness of their proposed approach. Notably, on the CIFAR-10 dataset, the proposed framework improves robust accuracy by 3.83% and standard accuracy by 1.3% compared to the conventional end-to-end adversarial training baseline. Furthermore, their ensemble strategy, which integrates several pretraining tasks, yields a further 3.59% gain in robust accuracy while maintaining slightly higher standard accuracy.

Experimental Insights

  • Robust Fine-Tuning: Adversarial fine-tuning contributes more to final robustness than adversarial pretraining does; robust pretraining primarily serves to accelerate the fine-tuning procedure.
  • Self-Supervised Task Diversity: The paper reveals notable differences in adversarial vulnerability across models pretrained with different self-supervised tasks. This diversity suggests that complementary strengths can be combined by ensembling the pretraining tasks.
  • Ensemble Strategy: By aggregating several self-supervised tasks during pretraining, the ensemble strategy further boosts robust accuracy and sets a new benchmark for adversarially trained models on CIFAR-10 (an illustrative sketch of prediction-level ensembling follows this list).
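One way to read the ensemble strategy is at the prediction level: fine-tune a classifier from each robust pretrained backbone (the paper studies the Selfie, Rotation, and Jigsaw pretext tasks) and average their logits at inference. The sketch below shows that reading; the model names and the evaluation helper are hypothetical, and the authors' exact aggregation may differ.

```python
import torch

class LogitEnsemble(torch.nn.Module):
    """Average the logits of classifiers fine-tuned from different
    robust self-supervised pretrained backbones."""
    def __init__(self, models):
        super().__init__()
        self.models = torch.nn.ModuleList(models)

    def forward(self, x):
        # Stack each member's logits and average over the ensemble axis.
        return torch.stack([m(x) for m in self.models]).mean(dim=0)

# Hypothetical usage: one fine-tuned classifier per pretext task.
# ensemble = LogitEnsemble([selfie_model, rotation_model, jigsaw_model])
# robust_acc = evaluate_under_pgd(ensemble, test_loader)  # assumed helper
```

Averaging logits rather than hard predictions lets members that disagree still contribute calibrated evidence, which is consistent with the paper's observation that differently pretrained models have diverse adversarial vulnerabilities.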

Implications and Future Work

This paper extends the understanding of robust AI and offers a clear path toward more resilient machine learning systems. By incorporating adversarial elements within the self-supervised pretraining paradigm, there is potential to significantly enhance AI systems' defense against adversarial attacks. Future research could explore scaling these approaches to larger, high-resolution datasets and expanding the selection of self-supervised pretraining tasks. Additionally, exploring the computational efficiency versus robustness trade-offs across various neural architectures remains a promising avenue.

In conclusion, the paper makes a significant contribution to the study of adversarial robustness by illustrating how self-supervised pretraining, combined with thoughtful adversarial training strategies, can yield robust and computationally efficient models that better resist adversarial attacks.