When are Lemons Purple? The Concept Association Bias of Vision-Language Models (2212.12043v2)

Published 22 Dec 2022 in cs.CV, cs.CL, and cs.LG

Abstract: Large-scale vision-language models such as CLIP have shown impressive performance on zero-shot image classification and image-to-text retrieval. However, such performance does not carry over to tasks that require a finer-grained correspondence between vision and language, such as Visual Question Answering (VQA). As a potential cause of the difficulty of applying these models to VQA and similar tasks, we report an interesting phenomenon of vision-language models, which we call the Concept Association Bias (CAB). We find that models with CAB tend to treat the input as a bag of concepts and attempt to fill in the other, missing concept cross-modally, leading to unexpected zero-shot predictions. We demonstrate CAB by showing that CLIP's zero-shot classification performance greatly suffers when there is a strong concept association between an object (e.g. eggplant) and an attribute (e.g. the color purple). We also show that the strength of CAB predicts performance on VQA. We observe that CAB is prevalent in vision-language models trained with contrastive losses, even when autoregressive losses are jointly employed. However, a model that relies solely on an autoregressive loss exhibits minimal or no signs of CAB.
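
The zero-shot probe behind this demonstration is straightforward to reproduce. Below is a minimal sketch using the Hugging Face CLIP API; the image path, checkpoint, and prompt wording are illustrative assumptions rather than the paper's exact protocol.

```python
# Minimal sketch of a CAB-style probe: given an image containing both a lemon and
# an eggplant (hypothetical file "lemon_and_eggplant.jpg"), ask CLIP about the
# lemon's color via zero-shot classification over text prompts.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

checkpoint = "openai/clip-vit-base-patch32"  # assumed checkpoint for illustration
model = CLIPModel.from_pretrained(checkpoint)
processor = CLIPProcessor.from_pretrained(checkpoint)

image = Image.open("lemon_and_eggplant.jpg")
prompts = ["a photo of a yellow lemon", "a photo of a purple lemon"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image has shape (num_images, num_texts); softmax gives prompt probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)[0]

# Under CAB, the model may assign surprisingly high probability to "purple lemon",
# filling in the attribute associated with the other object (eggplant) in the scene.
for prompt, p in zip(prompts, probs.tolist()):
    print(f"{prompt}: {p:.3f}")
```

This only illustrates the flavor of the bias; the paper's quantitative results come from controlled object-attribute pairings rather than a single image.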

Authors (4)
  1. Yutaro Yamada (13 papers)
  2. Yingtian Tang (5 papers)
  3. Ilker Yildirim (13 papers)
  4. Yoyo Zhang (1 paper)
Citations (7)