Anchor-based Robust Finetuning of Vision-Language Models (2404.06244v1)

Published 9 Apr 2024 in cs.CV

Abstract: We aim at finetuning a vision-language model without hurting its out-of-distribution (OOD) generalization. We address two types of OOD generalization, i.e., i) domain shift, such as natural to sketch images, and ii) zero-shot capability to recognize categories that were not contained in the finetuning data. Arguably, the diminished OOD generalization after finetuning stems from the excessively simplified finetuning target, which only provides class information, such as "a photo of a [CLASS]". This is distinct from the process by which CLIP was pretrained, where abundant text supervision carries rich semantic information. We therefore propose to compensate for the finetuning process with auxiliary supervision containing rich semantics, which acts as anchors to preserve the OOD generalization. Specifically, our method elaborates two types of anchors: i) a text-compensated anchor, which uses the images from the finetuning set but enriches the text supervision via a pretrained captioner, and ii) an image-text-pair anchor, which is retrieved from a dataset similar to CLIP's pretraining data according to the downstream task and is associated with original CLIP-style text carrying rich semantics. These anchors serve as auxiliary semantic information to maintain the original feature space of CLIP, thereby preserving its OOD generalization capabilities. Comprehensive experiments demonstrate that our method achieves in-distribution performance akin to conventional finetuning while attaining new state-of-the-art results on domain shift and zero-shot learning benchmarks.
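The abstract describes two auxiliary anchor losses layered on top of standard CLIP finetuning. Below is a minimal PyTorch-style sketch of how such a combined objective could look. The model interface (`encode_image` / `encode_text`), the batch keys, the loss weights `lambda_text` / `lambda_pair`, and the use of a symmetric InfoNCE loss for all three terms are assumptions made for illustration, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched image/text features."""
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = image_feats @ text_feats.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Average the image-to-text and text-to-image directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

def anchor_finetune_step(model, batch, lambda_text=1.0, lambda_pair=1.0):
    """One training step combining the finetuning loss with two
    anchor losses (batch keys and weights are hypothetical)."""
    # i) Standard finetuning target: "a photo of a [CLASS]" prompts.
    img = model.encode_image(batch["images"])
    cls_txt = model.encode_text(batch["class_prompts"])
    loss_ft = clip_contrastive_loss(img, cls_txt)

    # ii) Text-compensated anchor: same images, but richer captions
    # generated offline by a pretrained captioner.
    cap_txt = model.encode_text(batch["generated_captions"])
    loss_text_anchor = clip_contrastive_loss(img, cap_txt)

    # iii) Image-text-pair anchor: web pairs retrieved to resemble
    # CLIP's pretraining data for the downstream task.
    ret_img = model.encode_image(batch["retrieved_images"])
    ret_txt = model.encode_text(batch["retrieved_texts"])
    loss_pair_anchor = clip_contrastive_loss(ret_img, ret_txt)

    return loss_ft + lambda_text * loss_text_anchor + lambda_pair * loss_pair_anchor
```

The anchor terms keep gradients flowing through semantically rich image-text pairs, which is one plausible way to pull the finetuned encoders back toward CLIP's original feature space, as the abstract suggests.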

Authors (8)
  1. Jinwei Han
  2. Zhiwen Lin
  3. Zhongyisun Sun
  4. Yingguo Gao
  5. Ke Yan
  6. Shouhong Ding
  7. Yuan Gao
  8. Gui-Song Xia