
GalLoP: Learning Global and Local Prompts for Vision-Language Models (2407.01400v2)

Published 1 Jul 2024 in cs.CV

Abstract: Prompt learning has been widely adopted to efficiently adapt vision-language models (VLMs), e.g. CLIP, for few-shot image classification. Despite their success, most prompt learning methods trade off between classification accuracy and robustness, e.g. in domain generalization or out-of-distribution (OOD) detection. In this work, we introduce Global-Local Prompts (GalLoP), a new prompt learning method that learns multiple diverse prompts leveraging both global and local visual features. The training of the local prompts relies on local features with an enhanced vision-text alignment. To focus only on pertinent features, this local alignment is coupled with a sparsity strategy in the selection of the local features. We enforce diversity on the set of prompts using a new "prompt dropout" technique and a multiscale strategy on the local prompts. GalLoP outperforms previous prompt learning methods in accuracy on eleven datasets across different few-shot settings and with various backbones. Furthermore, GalLoP shows strong robustness in both domain generalization and OOD detection, even outperforming dedicated OOD detection methods. Code and instructions to reproduce our results: https://github.com/MarcLafon/gallop.
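The abstract names two mechanisms: a sparsity strategy that keeps only the local visual features best aligned with the text embedding, and a "prompt dropout" that randomly drops whole prompts during training to enforce diversity. A minimal sketch of how these two ideas could look is below; the function names, the top-k cosine-similarity selection rule, and the per-prompt drop probability are illustrative assumptions, not GalLoP's actual implementation (see the linked repository for that).

```python
import random
import numpy as np

def select_local_features(local_feats, text_feat, k=4):
    """Sparse local selection (hypothetical sketch): keep only the k
    local features with the highest cosine similarity to the text
    embedding, discarding the rest."""
    local = local_feats / np.linalg.norm(local_feats, axis=1, keepdims=True)
    text = text_feat / np.linalg.norm(text_feat)
    scores = local @ text                      # cosine similarity per patch
    top = np.argsort(scores)[::-1][:k]         # indices of the k best patches
    return local_feats[top], top

def prompt_dropout(prompts, drop_prob=0.5, rng=None):
    """Prompt dropout (hypothetical sketch): during training, drop each
    learned prompt independently with probability drop_prob, keeping at
    least one so the classifier always has an input."""
    rng = rng or random.Random()
    kept = [p for p in prompts if rng.random() >= drop_prob]
    return kept if kept else [rng.choice(prompts)]
```

At inference time the full prompt set would be used; the dropout only applies during training, analogously to how standard dropout is disabled at test time.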

Authors (5)
  1. Marc Lafon (5 papers)
  2. Elias Ramzi (8 papers)
  3. Clément Rambour (13 papers)
  4. Nicolas Audebert (27 papers)
  5. Nicolas Thome (53 papers)
Citations (2)
