
Three Towers: Flexible Contrastive Learning with Pretrained Image Models (2305.16999v3)

Published 26 May 2023 in cs.CV, cs.AI, and cs.LG

Abstract: We introduce Three Towers (3T), a flexible method to improve the contrastive learning of vision-language models by incorporating pretrained image classifiers. While contrastive models are usually trained from scratch, LiT (Zhai et al., 2022) has recently shown performance gains from using pretrained classifier embeddings. However, LiT directly replaces the image tower with the frozen embeddings, excluding any potential benefits from training the image tower contrastively. With 3T, we propose a more flexible strategy that allows the image tower to benefit from both pretrained embeddings and contrastive training. To achieve this, we introduce a third tower that contains the frozen pretrained embeddings, and we encourage alignment between this third tower and the main image-text towers. Empirically, 3T consistently improves over LiT and the CLIP-style from-scratch baseline for retrieval tasks. For classification, 3T reliably improves over the from-scratch baseline, and while it underperforms relative to LiT for JFT-pretrained models, it outperforms LiT for ImageNet-21k and Places365 pretraining.
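The abstract describes the core idea: alongside the usual image-text contrastive objective, 3T adds alignment terms that tie both trainable towers to a frozen pretrained tower. A minimal NumPy sketch of that loss structure follows. This is an illustrative reading of the abstract, not the paper's implementation: the function names, the symmetric InfoNCE form, the temperature, and the `alpha` weighting of the alignment terms are all assumptions.

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE contrastive loss between two embedding batches.

    Rows of `a` and `b` are treated as matched positive pairs; all other
    rows in the batch act as negatives. Temperature 0.07 is an assumption.
    """
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature
    labels = np.arange(len(a))

    def xent(l):
        # Numerically stable cross-entropy with the diagonal as targets.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))

def three_towers_loss(img_emb, txt_emb, frozen_emb, alpha=0.5):
    """Hypothetical 3T-style objective: main image-text contrastive term
    plus alignment of both trainable towers to the frozen third tower.

    `frozen_emb` stands in for the pretrained classifier embeddings; in
    training, no gradient would flow through it. `alpha` is an assumed
    weighting, not a value from the paper.
    """
    main = info_nce(img_emb, txt_emb)
    align = info_nce(img_emb, frozen_emb) + info_nce(txt_emb, frozen_emb)
    return main + alpha * align
```

Under this reading, setting `alpha=0` recovers a CLIP-style from-scratch objective, while the alignment terms let the image tower benefit from the pretrained embeddings without being frozen to them as in LiT.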

Authors (10)
  1. Jannik Kossen (14 papers)
  2. Mark Collier (19 papers)
  3. Basil Mustafa (32 papers)
  4. Xiao Wang (507 papers)
  5. Xiaohua Zhai (51 papers)
  6. Lucas Beyer (46 papers)
  7. Andreas Steiner (17 papers)
  8. Jesse Berent (18 papers)
  9. Rodolphe Jenatton (41 papers)
  10. Efi Kokiopoulou (12 papers)
Citations (10)