A Sober Look at the Robustness of CLIPs to Spurious Features (2403.11497v2)
Abstract: Large vision language models, such as CLIP, demonstrate more impressive robustness to spurious features than single-modal models trained on ImageNet. However, existing test datasets are typically curated based on ImageNet-trained models, aiming to capture the spurious features inherent in ImageNet. Benchmarking CLIP models on ImageNet-oriented spurious features may therefore be insufficient to reflect the extent to which CLIP models are robust to spurious correlations within their own training data, e.g., LAION. To this end, we craft a new challenging dataset named CounterAnimal, designed to reveal the reliance of CLIP models on realistic spurious features. Specifically, we split animal photos into groups according to their backgrounds, and then, for each class, identify a pair of groups across which a CLIP model shows a large performance drop. Our evaluations show that the spurious features captured by CounterAnimal are learned generically by CLIP models with different backbones and pre-training data, yet have limited influence on ImageNet models. We provide theoretical insights that the CLIP objective cannot offer additional robustness. Furthermore, we re-evaluate strategies such as scaling up parameters and using high-quality pre-training data, and find that they still help mitigate reliance on spurious features, suggesting a promising path for future developments.
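As a rough illustration of the evaluation protocol described in the abstract, the sketch below scores a CLIP model's zero-shot accuracy separately on two background groups of one animal class and reports the drop between the common ("easy") and counter ("hard") background. The `open_clip` model name and pretrained tag, the folder layout, the prompt template, and the class names are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: measure a CLIP model's zero-shot accuracy gap between two
# background groups of one animal class. Folder layout, prompts, and class
# names below are illustrative assumptions, not the paper's exact pipeline.
from pathlib import Path

import torch
from PIL import Image
import open_clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model = model.to(device).eval()
tokenizer = open_clip.get_tokenizer("ViT-B-32")

CLASS_NAMES = ["ice bear", "brown bear"]  # hypothetical label set
TARGET = 0  # index of the class whose background groups we compare

# Zero-shot classifier: one normalized text embedding per class name.
with torch.no_grad():
    text = tokenizer([f"a photo of a {c}" for c in CLASS_NAMES]).to(device)
    text_feat = model.encode_text(text)
    text_feat /= text_feat.norm(dim=-1, keepdim=True)

def group_accuracy(image_dir: str) -> float:
    """Fraction of images in image_dir that CLIP assigns to TARGET."""
    hits, total = 0, 0
    for path in Path(image_dir).glob("*.jpg"):
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        with torch.no_grad():
            feat = model.encode_image(img)
            feat /= feat.norm(dim=-1, keepdim=True)
            pred = (feat @ text_feat.T).argmax(dim=-1).item()
        hits += int(pred == TARGET)
        total += 1
    return hits / max(total, 1)

# Hypothetical paths: "easy" = common background, "hard" = counter background.
easy = group_accuracy("counteranimal/ice_bear/snow")
hard = group_accuracy("counteranimal/ice_bear/grass")
print(f"easy {easy:.3f}  hard {hard:.3f}  drop {easy - hard:.3f}")
```

Under this protocol, a large positive drop for a class indicates that the model leans on the background as a spurious feature; sweeping the measurement over classes and background pairs is what selects the group pairs that make up the benchmark.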
Authors: Qizhou Wang, Yong Lin, Yongqiang Chen, Ludwig Schmidt, Bo Han, Tong Zhang