Poster: Adapting Pretrained Vision Transformers with LoRA Against Attack Vectors (2506.00661v1)

Published 31 May 2025 in cs.CV

Abstract: Image classifiers, such as those used for autonomous vehicle navigation, are widely known to be susceptible to adversarial attacks that target the input image set. There is extensive discussion on adversarial attacks including perturbations that alter the input images to cause malicious misclassifications without perceivable modification. This work proposes a countermeasure for such attacks by adjusting the weights and classes of pretrained vision transformers with a low-rank adaptation to become more robust against adversarial attacks and allow for scalable fine-tuning without retraining.

Summary

Adapting Pretrained Vision Transformers with LoRA Against Attack Vectors

The paper "Adapting Pretrained Vision Transformers with LoRA Against Attack Vectors" presents a study on enhancing the robustness of Vision Transformers (ViTs) against adversarial perturbations via low-rank adaptation (LoRA). The research focuses on the vulnerability of image classifiers to adversarial attacks, particularly in applications involving autonomous vehicle systems. The paper introduces a novel approach utilizing LoRA to modify the weights of pre-trained ViTs, thereby improving their resistance to adversarial perturbations without requiring exhaustive retraining processes.

Overview

Vision Transformers have shown strong results on image-related tasks thanks to their ability to model global relationships, but they lack the built-in inductive biases of convolutional networks and consequently demand large datasets for effective training. This requirement becomes a bottleneck when new data types or classes need to be integrated. The study uses LoRA to attach low-rank adapters to pretrained ViTs, allowing them to be fine-tuned efficiently against adversarial inputs such as Fast Gradient Sign Method (FGSM) perturbations, as sketched below.
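The paper itself does not include code; the following is a minimal sketch of the FGSM attack being defended against, assuming a PyTorch model that returns raw logits. The function and parameter names are illustrative, not taken from the paper.

```python
# Minimal FGSM sketch (PyTorch assumed; names are illustrative, not from the paper).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon):
    """Return x_adv = x + epsilon * sign(grad_x L(f(x), y))."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)  # assumes model(x) returns raw logits
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # clip to a valid pixel range, assuming inputs in [0, 1]
```

The single scaled sign step is what makes FGSM cheap to generate, which is also why it is a common baseline for robustness evaluations like the one in this paper.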

Methodology

The pretrained ViT model used in this research was Google's vit-base-patch16-224-in21k, originally trained on the ImageNet-21K dataset. The authors used the Mapillary dataset to source stop-sign images and introduce new classes into the model. FGSM was employed to simulate adversarial attacks, generating perturbations by adding the sign of the loss gradient, scaled by a strength parameter, to the input image. Two LoRA adapters were attached to the ViT: the first adapted the model to recognize stop signs, while the second fine-tuned it to correctly classify perturbed stop signs.
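The paper does not publish its training code, but the described setup maps naturally onto the Hugging Face transformers and peft libraries. A hedged sketch of attaching a LoRA adapter to the same checkpoint might look like the following; the rank, alpha, dropout, and label count are assumptions for illustration, not values reported in the paper.

```python
# Sketch of attaching a LoRA adapter to the pretrained ViT checkpoint named in the paper.
# Hyperparameters (rank, alpha, dropout) and num_labels are illustrative assumptions.
from transformers import ViTForImageClassification
from peft import LoraConfig, get_peft_model

base_model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=2,  # e.g. "stop sign" vs. "not a stop sign" (assumed, not from the paper)
)

lora_config = LoraConfig(
    r=16,                               # low-rank dimension (assumed)
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # ViT self-attention projections
    modules_to_save=["classifier"],     # also train the newly initialized classification head
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices and the head are trainable
```

Training such a model updates only the low-rank adapter matrices and the new classification head, which is what keeps the adaptation lightweight compared to full retraining of the ViT backbone.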

Results

The evaluation benchmarked the classification performance of the ViT, before and after applying the LoRA adapters, across a range of FGSM perturbation strengths. The adapted ViT achieved a classification accuracy increase of up to 84.4% under perturbation attacks, indicating that the approach effectively mitigates adversarial perturbations. The findings underscore the potential of LoRA for efficient adaptation, particularly for addressing the vulnerabilities that attack vectors expose in pretrained models.
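The exact evaluation harness is not reproduced in this summary. The sketch below shows one plausible way to structure such a sweep, assuming the Hugging Face-style model from the previous sketch (outputs with `.logits`, inputs scaled to [0, 1]) and a hypothetical `eval_loader` yielding (images, labels) batches; the epsilon values are illustrative, not those used in the paper.

```python
# Sketch of an accuracy sweep over FGSM strengths for a LoRA-adapted ViT.
# `model` is assumed to return Hugging Face-style outputs with `.logits`;
# `eval_loader` is a hypothetical DataLoader yielding (images, labels) batches.
import torch
import torch.nn.functional as F

def accuracy_under_fgsm(model, loader, epsilon):
    """Fraction of samples classified correctly after an FGSM perturbation of size epsilon."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images).logits, labels)
        loss.backward()
        adv = (images + epsilon * images.grad.sign()).clamp(0.0, 1.0).detach()
        model.zero_grad(set_to_none=True)  # discard parameter gradients; this is evaluation only
        with torch.no_grad():
            preds = model(adv).logits.argmax(dim=-1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Hypothetical sweep; the epsilon values are illustrative, not those used in the paper.
for eps in (0.0, 0.01, 0.03, 0.1):
    print(f"epsilon={eps}: accuracy={accuracy_under_fgsm(model, eval_loader, eps):.3f}")
```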

Implications and Future Insights

The results have significant implications for deploying ViTs in practical applications, especially those requiring consistent performance in adversarial environments such as autonomous driving. The proposed method opens avenues for integrating efficient and scalable defense mechanisms into deep learning models without extensive computational overhead or retraining.

Future work could explore the adaptation of ViTs against a broader spectrum of attack vectors, potentially involving more complex adversarial techniques or extending the dataset to achieve better model generalization. Additionally, investigating variants of ViT architectures could provide deeper insights into model performance variability before and after LoRA application.

In summary, the paper illustrates a promising strategy for enhancing the resilience of vision transformers against adversarial attacks, contributing to ongoing efforts to secure autonomous systems against such vulnerabilities. The method relies on scalable techniques and lends itself to practical deployment in scenarios where only minimal retraining is feasible yet meaningful adaptation is required.
