Prompt Tuning based Adapter for Vision-Language Model Adaption (2303.15234v1)

Published 24 Mar 2023 in cs.CV and cs.AI

Abstract: Large pre-trained vision-language (VL) models have shown significant promise in adapting to various downstream tasks. However, fine-tuning the entire network is challenging due to the massive number of model parameters. To address this issue, efficient adaptation methods such as prompt tuning have been proposed. We explore the idea of prompt tuning with multi-task pre-trained initialization and find that it can significantly improve model performance. Based on our findings, we introduce a new model, termed Prompt-Adapter, that combines pre-trained prompt tuning with an efficient adaptation network. Our approach beats the state-of-the-art methods in few-shot image classification on 11 public datasets, especially in settings with limited data, such as 1, 2, 4, and 8 shots. The proposed method demonstrates the promise of combining prompt tuning and parameter-efficient networks for efficient vision-language model adaptation. The code is publicly available at: https://github.com/Jingchensun/prompt_adapter.
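To make the combination concrete, here is a minimal PyTorch sketch of the idea the abstract describes: learnable soft prompts initialized from multi-task pre-training drive a frozen CLIP-style text encoder, while a small residual adapter refines the frozen image features. This is an illustration under stated assumptions, not the authors' implementation; in particular, `encode_text_embeds` is a hypothetical helper that runs the text transformer on already-embedded tokens (as CoOp-style methods do), and the module names and the mixing ratio `alpha` are invented for the example.

```python
import torch
import torch.nn as nn

class PromptAdapter(nn.Module):
    """Sketch of a Prompt-Adapter-style module: soft prompts with
    pre-trained initialization plus a residual feature adapter, on top
    of a frozen CLIP-like backbone. Names are illustrative assumptions."""

    def __init__(self, clip_model, pretrained_prompts, dim=512,
                 reduction=4, alpha=0.2):
        super().__init__()
        self.clip = clip_model
        for p in self.clip.parameters():   # keep the backbone frozen
            p.requires_grad = False
        # Learnable context vectors, initialized from multi-task
        # pre-trained prompts rather than randomly. Shape: (n_ctx, dim).
        self.prompts = nn.Parameter(pretrained_prompts.clone())
        # Lightweight bottleneck adapter applied to image features.
        self.adapter = nn.Sequential(
            nn.Linear(dim, dim // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim),
        )
        self.alpha = alpha  # residual mixing ratio (hyperparameter)

    def forward(self, images, class_token_embeds):
        # class_token_embeds: embedded class-name tokens, (n_cls, n_tok, dim)
        img = self.clip.encode_image(images)                      # (B, dim)
        img = self.alpha * self.adapter(img) + (1 - self.alpha) * img
        # Prepend the shared soft prompts to every class's name tokens.
        n_cls = class_token_embeds.shape[0]
        ctx = self.prompts.unsqueeze(0).expand(n_cls, -1, -1)
        # Hypothetical helper: runs the text transformer on pre-embedded
        # tokens (CoOp-style); standard CLIP only exposes encode_text().
        txt = self.clip.encode_text_embeds(
            torch.cat([ctx, class_token_embeds], dim=1))          # (n_cls, dim)
        img = img / img.norm(dim=-1, keepdim=True)
        txt = txt / txt.norm(dim=-1, keepdim=True)
        return 100.0 * img @ txt.t()                              # (B, n_cls) logits
```

In this sketch only `self.prompts` and the two adapter layers receive gradients, so the trainable parameter count stays tiny relative to the full backbone, which is the point of combining prompt tuning with a parameter-efficient adaptation network.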

Authors (4)
  1. Jingchen Sun (4 papers)
  2. Jiayu Qin (4 papers)
  3. Zihao Lin (22 papers)
  4. Changyou Chen (108 papers)
Citations (5)