
CPL: Counterfactual Prompt Learning for Vision and Language Models (2210.10362v3)

Published 19 Oct 2022 in cs.CV, cs.AI, and cs.CL

Abstract: Prompt tuning is a new few-shot transfer learning technique that tunes only the learnable prompt for pre-trained vision and language models such as CLIP. However, existing prompt tuning methods tend to learn spurious or entangled representations, which leads to poor generalization to unseen concepts. Towards non-spurious and efficient prompt learning from limited examples, this paper presents a novel Counterfactual Prompt Learning (CPL) method for vision and language models, which simultaneously employs counterfactual generation and contrastive learning in a joint optimization framework. In particular, CPL constructs counterfactuals by identifying the minimal non-spurious feature change between semantically similar positive and negative samples that causes a concept change, and learns more generalizable prompt representations from both factual and counterfactual examples via contrastive learning. Extensive experiments demonstrate that CPL obtains superior few-shot performance on different vision-and-language tasks compared to previous prompt tuning methods on CLIP. On image classification, we achieve a 3.55% average relative improvement on unseen classes across seven datasets; on image-text retrieval and visual question answering, we gain up to 4.09% and 25.08% relative improvements, respectively, across three few-shot scenarios on unseen test sets.
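The abstract's core mechanism, building a counterfactual by minimally swapping non-spurious feature dimensions from a semantically similar negative into a positive sample, then contrasting factual and counterfactual pairs, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the feature vectors, the binary `mask` (standing in for the learned minimal feature-change), and the InfoNCE-style loss with temperature `tau` are all assumptions for the purpose of illustration.

```python
import numpy as np

def counterfactual_feature(pos, neg, mask):
    """Blend a minimal set of feature dims from a negative sample
    into a positive one; `mask` marks the dims that are changed."""
    return (1.0 - mask) * pos + mask * neg

def contrastive_loss(text_feat, factual, counterfactual, tau=0.07):
    """InfoNCE-style loss: the prompt/text feature should score
    higher against the factual image feature than the counterfactual."""
    def unit(v):
        return v / np.linalg.norm(v)
    t, f, c = unit(text_feat), unit(factual), unit(counterfactual)
    s_f = np.dot(t, f) / tau          # similarity to factual
    s_c = np.dot(t, c) / tau          # similarity to counterfactual
    # -log softmax over {factual, counterfactual}
    return -s_f + np.log(np.exp(s_f) + np.exp(s_c))
```

In a toy 2-D example, the loss is smaller when the text feature aligns with the factual sample than when it aligns with the counterfactual, which is the gradient signal that would push the learnable prompt toward non-spurious features.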

Authors (10)
  1. Xuehai He (26 papers)
  2. Diji Yang (10 papers)
  3. Weixi Feng (14 papers)
  4. Tsu-Jui Fu (35 papers)
  5. Arjun Akula (6 papers)
  6. Varun Jampani (125 papers)
  7. Pradyumna Narayana (12 papers)
  8. Sugato Basu (16 papers)
  9. William Yang Wang (254 papers)
  10. Xin Eric Wang (74 papers)
Citations (14)