Does CLIP Bind Concepts? Probing Compositionality in Large Image Models (2212.10537v3)

Published 20 Dec 2022 in cs.CV, cs.AI, and cs.CL

Abstract: Large-scale neural network models combining text and images have made incredible progress in recent years. However, it remains an open question to what extent such models encode compositional representations of the concepts over which they operate, such as correctly identifying "red cube" by reasoning over the constituents "red" and "cube". In this work, we focus on the ability of a large pretrained vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way (e.g., differentiating "cube behind sphere" from "sphere behind cube"). To inspect the performance of CLIP, we compare several architectures from research on compositional distributional semantics models (CDSMs), a line of research that attempts to implement traditional compositional linguistic structures within embedding spaces. We benchmark them on three synthetic datasets - single-object, two-object, and relational - designed to test concept binding. We find that CLIP can compose concepts in a single-object setting, but in situations where concept binding is needed, performance drops dramatically. At the same time, CDSMs also perform poorly, with best performance at chance level.
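The kind of structure-sensitive probe the abstract describes can be illustrated by scoring one image against two captions that differ only in how the concepts are bound. The sketch below is not the authors' exact protocol; it assumes the Hugging Face `transformers` CLIP checkpoint "openai/clip-vit-base-patch32" and a placeholder image file `scene.png`.

```python
# Minimal sketch of a word-order / concept-binding probe with CLIP.
# Assumptions: the "openai/clip-vit-base-patch32" checkpoint and a local
# image "scene.png" are illustrative placeholders, not the paper's setup.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Two captions that contain the same concepts but bind them differently.
captions = ["a cube behind a sphere", "a sphere behind a cube"]
image = Image.open("scene.png")

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image-to-caption similarity scores.
probs = outputs.logits_per_image.softmax(dim=-1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{caption}: {p:.3f}")
# A model that binds concepts in a structure-sensitive way should assign
# a clearly higher score to the caption that matches the scene.
```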

Authors (7)
  1. Martha Lewis (31 papers)
  2. Nihal V. Nayak (9 papers)
  3. Peilin Yu (9 papers)
  4. Qinan Yu (7 papers)
  5. Jack Merullo (15 papers)
  6. Stephen H. Bach (33 papers)
  7. Ellie Pavlick (66 papers)
Citations (45)