Do We Really Need a Large Number of Visual Prompts? (2305.17223v2)

Published 26 May 2023 in cs.CV and cs.AI

Abstract: Due to increasing interest in adapting models on resource-constrained edge devices, parameter-efficient transfer learning has been widely explored. Among various methods, Visual Prompt Tuning (VPT), which prepends learnable prompts to the input space, shows fine-tuning performance competitive with training the full set of network parameters. However, VPT increases the number of input tokens, resulting in additional computational overhead. In this paper, we analyze how the number of prompts affects fine-tuning performance and the self-attention operation in a vision transformer architecture. Through theoretical and empirical analysis, we show that adding more prompts does not lead to a linear performance improvement. We further propose a Prompt Condensation (PC) technique that aims to prevent the performance degradation caused by using a small number of prompts. We validate our method on the FGVC and VTAB-1k tasks and show that our approach reduces the number of prompts by ~70% while maintaining accuracy.
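
For readers unfamiliar with VPT, the core mechanism the abstract refers to is simply prepending learnable prompt tokens to the transformer's input sequence. Below is a minimal sketch of that idea, assuming a PyTorch-style ViT whose [CLS] and patch embeddings are computed upstream; the class name, `encoder`, and `num_prompts` are illustrative choices, not the paper's code.

```python
import torch
import torch.nn as nn


class VisualPromptedViT(nn.Module):
    """Wraps a frozen ViT encoder and prepends learnable prompt tokens (VPT-style sketch)."""

    def __init__(self, encoder: nn.Module, embed_dim: int, num_prompts: int = 10):
        super().__init__()
        self.encoder = encoder
        # The backbone stays frozen; only the prompts (and typically a task head) are trained.
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, embed_dim) * 0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, 1 + num_patches, embed_dim), i.e. [CLS] followed by patch embeddings.
        b = tokens.shape[0]
        prompts = self.prompts.expand(b, -1, -1)
        # Insert prompts after [CLS]; the sequence grows by num_prompts tokens,
        # so the cost of every self-attention layer grows as well.
        tokens = torch.cat([tokens[:, :1], prompts, tokens[:, 1:]], dim=1)
        return self.encoder(tokens)
```

This lengthening of the token sequence is the computational overhead the paper analyzes, and reducing the number of prompts without losing accuracy is what Prompt Condensation targets.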

Authors (5)
  1. Youngeun Kim (48 papers)
  2. Yuhang Li (102 papers)
  3. Abhishek Moitra (30 papers)
  4. Priyadarshini Panda (104 papers)
  5. Ruokai Yin (15 papers)
Citations (2)
