
E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning (2307.13770v1)

Published 25 Jul 2023 in cs.CV and cs.AI

Abstract: As the size of transformer-based models continues to grow, fine-tuning these large-scale pretrained vision models for new tasks has become increasingly parameter-intensive. Parameter-efficient learning has been developed to reduce the number of tunable parameters during fine-tuning. Although these methods show promising results, there is still a significant performance gap compared to full fine-tuning. To address this challenge, we propose an Effective and Efficient Visual Prompt Tuning (E^2VPT) approach for large-scale transformer-based model adaptation. Specifically, we introduce a set of learnable key-value prompts and visual prompts into self-attention and input layers, respectively, to improve the effectiveness of model fine-tuning. Moreover, we design a prompt pruning procedure to systematically prune low importance prompts while preserving model performance, which largely enhances the model's efficiency. Empirical results demonstrate that our approach outperforms several state-of-the-art baselines on two benchmarks, with considerably low parameter usage (e.g., 0.32% of model parameters on VTAB-1k). Our code is available at https://github.com/ChengHan111/E2VPT.
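The core mechanism the abstract describes, prepending learnable key-value prompts inside self-attention while the backbone stays frozen, can be illustrated with a minimal sketch. This is not the authors' implementation (see their repository for that); the class name, dimensions, and prompt count below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PromptedSelfAttention(nn.Module):
    """Sketch of self-attention with learnable key-value prompts.

    Illustrative only: learnable prompt tensors are concatenated to the
    keys and values (not the queries), so the output keeps the original
    token count while attention can also attend to the prompts.
    """

    def __init__(self, dim: int, num_kv_prompts: int = 4):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        # Learnable key-value prompts; in prompt tuning these (plus the
        # input-layer visual prompts) are the only trainable parameters.
        self.k_prompt = nn.Parameter(torch.zeros(1, num_kv_prompts, dim))
        self.v_prompt = nn.Parameter(torch.zeros(1, num_kv_prompts, dim))
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Prepend the learnable prompts to keys and values only.
        k = torch.cat([self.k_prompt.expand(B, -1, -1), k], dim=1)
        v = torch.cat([self.v_prompt.expand(B, -1, -1), v], dim=1)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        return attn @ v  # shape (B, N, D): token count is unchanged

if __name__ == "__main__":
    layer = PromptedSelfAttention(dim=8, num_kv_prompts=4)
    out = layer(torch.randn(2, 5, 8))
    print(out.shape)  # torch.Size([2, 5, 8])
```

During fine-tuning one would freeze the pretrained weights (here, `self.qkv`) and optimize only the prompt parameters; the paper's pruning step then removes low-importance prompt tokens to reduce this budget further.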

Authors (7)
  1. Cheng Han (17 papers)
  2. Qifan Wang (129 papers)
  3. Yiming Cui (80 papers)
  4. Zhiwen Cao (10 papers)
  5. Wenguan Wang (103 papers)
  6. Siyuan Qi (34 papers)
  7. Dongfang Liu (44 papers)
Citations (29)
