Zoom-shot: Fast and Efficient Unsupervised Zero-Shot Transfer of CLIP to Vision Encoders with Multimodal Loss (2401.11633v1)

Published 22 Jan 2024 in cs.CV and cs.AI

Abstract: The fusion of vision and language has brought about a transformative shift in computer vision through the emergence of Vision-Language Models (VLMs). However, the resource-intensive nature of existing VLMs poses a significant challenge. We need an accessible method for developing the next generation of VLMs. To address this issue, we propose Zoom-shot, a novel method for transferring the zero-shot capabilities of CLIP to any pre-trained vision encoder. We do this by exploiting the multimodal information (i.e., text and image) present in the CLIP latent space through the use of specifically designed multimodal loss functions. These loss functions are (1) cycle-consistency loss and (2) our novel prompt-guided knowledge distillation loss (PG-KD). PG-KD combines the concept of knowledge distillation with CLIP's zero-shot classification to capture the interactions between text and image features. With our multimodal losses, we train a $\textbf{linear mapping}$ between the CLIP latent space and the latent space of a pre-trained vision encoder, for only a $\textbf{single epoch}$. Furthermore, Zoom-shot is entirely unsupervised and is trained using $\textbf{unpaired}$ data. We test the zero-shot capabilities of a range of vision encoders augmented as new VLMs on coarse and fine-grained classification datasets, outperforming the previous state-of-the-art in this problem domain. In our ablations, we find Zoom-shot allows for a trade-off between data and compute during training; our state-of-the-art results can also be obtained by reducing the training data from 20% to 1% of the ImageNet training set while extending training to 20 epochs. All code and models are available on GitHub.
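The abstract compresses the training recipe into a few sentences; the sketch below unpacks how the two losses could drive the linear mapping in PyTorch. Everything here is a hedged illustration rather than the authors' released code: the names (map_fwd, map_bwd, zoomshot_step), the latent dimensions, the learning rate, the temperature, and the equal loss weighting are all assumptions, and the backward linear map is just one way to realize the cycle-consistency term.

```python
# Hedged sketch of a Zoom-shot-style training step (not the authors' code).
# Assumed names: map_fwd/map_bwd, zoomshot_step; assumed dims, lr, and tau.
import torch
import torch.nn.functional as F

d_clip, d_enc = 512, 768                      # assumed latent dimensions
map_fwd = torch.nn.Linear(d_enc, d_clip)      # encoder space -> CLIP space
map_bwd = torch.nn.Linear(d_clip, d_enc)      # CLIP space -> encoder space
opt = torch.optim.Adam(
    list(map_fwd.parameters()) + list(map_bwd.parameters()), lr=1e-3)

def zoomshot_step(enc_feats, clip_img_feats, clip_text_feats, tau=0.01):
    """One optimization step combining the two multimodal losses.

    enc_feats:       frozen pre-trained vision-encoder features for a batch
    clip_img_feats:  frozen CLIP image features for the same batch of images
                     (no captions are needed, matching the unpaired setting)
    clip_text_feats: frozen CLIP text features for a set of class prompts
    """
    # (1) Cycle-consistency: mapping into CLIP space and back (and the
    # reverse direction) should reconstruct the input features.
    cycle = (F.mse_loss(map_bwd(map_fwd(enc_feats)), enc_feats)
             + F.mse_loss(map_fwd(map_bwd(clip_img_feats)), clip_img_feats))

    # (2) Prompt-guided knowledge distillation: CLIP's zero-shot logits over
    # the prompts are the teacher; the mapped encoder features give the
    # student logits over the same prompts.
    text = F.normalize(clip_text_feats, dim=-1)
    teacher = F.normalize(clip_img_feats, dim=-1) @ text.T
    student = F.normalize(map_fwd(enc_feats), dim=-1) @ text.T
    pg_kd = F.kl_div(F.log_softmax(student / tau, dim=-1),
                     F.softmax(teacher / tau, dim=-1),
                     reduction="batchmean")

    loss = cycle + pg_kd                      # equal weighting is an assumption
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

After training, the augmented encoder acts as a zero-shot classifier by mapping its features into CLIP space and comparing against the prompt embeddings, e.g. `(F.normalize(map_fwd(enc_feats), dim=-1) @ F.normalize(clip_text_feats, dim=-1).T).softmax(-1)`.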

Authors (5)
  1. Jordan Shipard
  2. Arnold Wiliem
  3. Kien Nguyen Thanh
  4. Wei Xiang
  5. Clinton Fookes
Citations (2)