CapS-Adapter: Caption-based MultiModal Adapter in Zero-Shot Classification (2405.16591v2)
Abstract: Recent advances in vision-language foundation models, such as CLIP, have brought significant strides in zero-shot classification. However, the extensive parameterization of models like CLIP necessitates a resource-intensive fine-tuning process. In response, TIP-Adapter and SuS-X have introduced training-free methods aimed at improving performance on downstream tasks. While these approaches incorporate support sets to maintain data distribution consistency between the knowledge cache and test sets, they often fall short in terms of generalization on the test set, particularly when faced with test data exhibiting substantial distributional variations. In this work, we present CapS-Adapter, an innovative method that employs a caption-based support set, effectively harnessing both image and caption features to exceed existing state-of-the-art techniques in training-free scenarios. CapS-Adapter constructs support sets that closely mirror target distributions, utilizing instance-level distribution features extracted from multimodal large models. By leveraging CLIP's single-modal and cross-modal strengths, CapS-Adapter enhances predictive accuracy through the use of multimodal support sets. Our method achieves outstanding zero-shot classification results across 19 benchmark datasets, improving accuracy by 2.19% over the previous leading method. Our contributions are substantiated through extensive validation on multiple benchmark datasets, demonstrating superior performance and robust generalization capabilities. Our code is made publicly available at https://github.com/WLuLi/CapS-Adapter.
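The abstract describes the mechanism only at a high level. The sketch below illustrates the general idea of a TIP-Adapter-style training-free cache extended with a multimodal (image plus caption) support set; it is a minimal sketch under stated assumptions, not the authors' implementation. The random placeholder features, the mixing weight `beta`, and the hyperparameters `alpha` and `gamma` are illustrative assumptions; CapS-Adapter's actual caption generation with multimodal large models and its fusion details follow the paper and released code.

```python
# Minimal sketch (not the authors' code) of a TIP-Adapter-style training-free
# cache classifier combined with a multimodal (image + caption) support set.
# All tensors are random placeholders standing in for CLIP features; `beta`,
# `alpha`, and `gamma` are illustrative assumptions, not values from the paper.
import torch
import torch.nn.functional as F

num_classes, support_per_class, dim = 10, 8, 512
n_support = num_classes * support_per_class

def randn_unit(*shape):
    # Placeholder for L2-normalized CLIP features.
    return F.normalize(torch.randn(*shape), dim=-1)

support_img = randn_unit(n_support, dim)       # image features of support samples
support_cap = randn_unit(n_support, dim)       # text features of their captions
support_labels = F.one_hot(
    torch.arange(n_support) % num_classes, num_classes
).float()                                      # cache values: one-hot class labels
class_text = randn_unit(num_classes, dim)      # class-prompt text features
test_img = randn_unit(4, dim)                  # a batch of test image features

beta = 0.5                # assumed image/caption mixing weight
alpha, gamma = 1.0, 5.0   # assumed residual weight and sharpness (TIP-Adapter style)

# Multimodal support keys: convex combination of image and caption features.
support_keys = F.normalize(beta * support_img + (1 - beta) * support_cap, dim=-1)

# Zero-shot CLIP logits (cross-modal): test images vs. class prompts.
clip_logits = 100.0 * test_img @ class_text.t()

# Cache logits: similarity to support keys, sharpened and mapped onto labels.
affinity = test_img @ support_keys.t()
cache_logits = torch.exp(-gamma * (1.0 - affinity)) @ support_labels

logits = clip_logits + alpha * cache_logits
print(logits.argmax(dim=-1))  # predicted class indices for the test batch
```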
- Gpt-4 technical report. arXiv preprint arXiv:2303.08774 (2023).
- Birdsnap: Large-scale fine-grained visual categorization of birds. In CVPR. 2011–2018.
- Food-101–mining discriminative components with random forests. In ECCV. 446–461.
- Language models are few-shot learners. NIPS 33 (2020), 1877–1901.
- Sharegpt4v: Improving large multi-modal models with better captions. arXiv preprint arXiv:2311.12793 (2023).
- Describing textures in the wild. In CVPR. 3606–3613.
- Embedding arithmetic of multimodal queries for image retrieval. In CVPR. 4950–4958.
- Imagenet: A large-scale hierarchical image database. In CVPR. 248–255.
- An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020).
- Data determines distributional robustness in contrastive language image pre-training (clip). In ICML. 6216–6234.
- Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In CVPR. 178–178.
- Clip-adapter: Better vision-language models with feature adapters. International Journal of Computer Vision 132, 2 (2024), 581–595.
- Caltech-256 object category dataset. (2007).
- Deep residual learning for image recognition. In CVPR. 770–778.
- Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 12, 7 (2019), 2217–2226.
- The many faces of robustness: A critical analysis of out-of-distribution generalization. In ICCV. 8340–8349.
- Scaling up visual and vision-language representation learning with noisy text supervision. In ICML. 4904–4916.
- 3d object representations for fine-grained categorization. In ICCV. 554–561.
- Learning multiple layers of features from tiny images. (2009).
- BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models. arXiv preprint arXiv:2301.12597 (2023).
- Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In ICML. 12888–12900.
- Improved baselines with visual instruction tuning. arXiv preprint arXiv:2310.03744 (2023).
- Visual instruction tuning. NIPS 36 (2024).
- Task-Oriented Multi-Modal Mutual Leaning for Vision-Language Models. In ICCV. 21959–21969.
- Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151 (2013).
- Maria-Elena Nilsback and Andrew Zisserman. 2008. Automated flower classification over a large number of classes. In 2008 Sixth Indian conference on computer vision, graphics & image processing. IEEE, 722–729.
- Cats and dogs. In CVPR. 3498–3505.
- What does a platypus look like? Generating customized prompts for zero-shot image classification. arXiv preprint arXiv:2209.03320 (2022).
- Learning transferable visual models from natural language supervision. In ICML. 8748–8763.
- High-resolution image synthesis with latent diffusion models. In CVPR. 10684–10695.
- Laion-5b: An open large-scale dataset for training next generation image-text models. NIPS 35 (2022), 25278–25294.
- UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402 (2012).
- Sus-x: Training-free name-only transfer of vision-language models. arXiv preprint arXiv:2211.16198 (2022).
- Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of machine learning research 9, 11 (2008).
- The caltech-ucsd birds-200-2011 dataset. (2011).
- Learning robust global representations by penalizing local predictive power. NIPS 32 (2019).
- Cogvlm: Visual expert for pretrained language models. arXiv preprint arXiv:2311.03079 (2023).
- Sun database: Large-scale scene recognition from abbey to zoo. In CVPR. 3485–3492.
- Task residual for tuning vision-language models. In CVPR. 10899–10909.
- Tip-adapter: Training-free adaption of clip for few-shot classification. In ECCV. 493–510.
- Conditional prompt learning for vision-language models. In CVPR. 16816–16825.
- Learning to prompt for vision-language models. International Journal of Computer Vision 130, 9 (2022), 2337–2348.
Authors: Qijie Wang, Guandu Liu, Bin Wang