
Language Plays a Pivotal Role in the Object-Attribute Compositional Generalization of CLIP (2403.18525v1)

Published 27 Mar 2024 in cs.CV, cs.CL, and cs.LG

Abstract: Vision-language models, such as CLIP, have shown promising Out-of-Distribution (OoD) generalization under various types of distribution shifts. Recent studies have attempted to investigate the leading cause of this capability. In this work, we follow the same path, but focus on a specific type of OoD data - images with novel compositions of attribute-object pairs - and study whether such models can successfully classify those images into composition classes. We carefully designed an authentic image test dataset called ImageNet-AO, consisting of attribute-object combinations unlikely to be encountered in CLIP training sets. We found that CLIPs trained on large datasets such as OpenAI CLIP, LAION-400M, and LAION-2B show orders-of-magnitude improvement in effective compositional OoD generalization compared to both supervised models and CLIPs trained on smaller datasets, such as CC-12M and YFCC-15M. Our results provide evidence that the scale and diversity of training data and language supervision play a key role in unlocking the compositional generalization abilities of vision-language models.
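The evaluation the abstract describes amounts to zero-shot classification over attribute-object composition classes: embed each candidate composition as text, embed the image, and pick the class with the highest cosine similarity. A minimal sketch of that scoring step follows; the composition names and embeddings are stand-ins for illustration, since a real pipeline would obtain them from CLIP's text and image encoders.

```python
import numpy as np

# Illustrative attribute-object compositions (not actual ImageNet-AO classes).
compositions = ["red car", "blue banana", "wooden dog", "furry apple"]

def cosine_sim(a, b):
    """Cosine similarity between vector a and each row of matrix b."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return b @ a

def classify(image_emb, text_embs):
    """Return the index of the best-matching composition class."""
    return int(np.argmax(cosine_sim(image_emb, text_embs)))

# Stand-in embeddings; in practice these come from CLIP's encoders.
rng = np.random.default_rng(0)
text_embs = rng.normal(size=(len(compositions), 512))
image_emb = text_embs[1] + 0.01 * rng.normal(size=512)  # near "blue banana"

print(compositions[classify(image_emb, text_embs)])  # → blue banana
```

With real CLIP encoders, each composition would typically be wrapped in a prompt template such as "a photo of a {attribute} {object}" before encoding.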

Authors (4)
  1. Reza Abbasi (8 papers)
  2. Mohammad Samiei (1 paper)
  3. Mohammad Hossein Rohban (43 papers)
  4. Mahdieh Soleymani Baghshah (50 papers)
