
Less is More: Removing Text-regions Improves CLIP Training Efficiency and Robustness (2305.05095v1)

Published 8 May 2023 in cs.CV and cs.AI

Abstract: The CLIP (Contrastive Language-Image Pre-training) model and its variants are becoming the de facto backbone in many applications. However, training a CLIP model from hundreds of millions of image-text pairs can be prohibitively expensive. Furthermore, the conventional CLIP model doesn't differentiate between the visual semantics and meaning of text regions embedded in images. This can lead to non-robustness when the text in the embedded region doesn't match the image's visual appearance. In this paper, we discuss two effective approaches to improve the efficiency and robustness of CLIP training: (1) augmenting the training dataset while maintaining the same number of optimization steps, and (2) filtering out samples that contain text regions in the image. By doing so, we significantly improve the classification and retrieval accuracy on public benchmarks like ImageNet and COCO. Filtering out images with text regions also protects the model from typographic attacks. To verify this, we build a new dataset named ImageNet with Adversarial Text Regions (ImageNet-Attr). Our filter-based CLIP model demonstrates a top-1 accuracy of 68.78%, outperforming previous models whose accuracies were all below 50%.
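The second approach in the abstract, filtering out training samples whose images contain text regions, can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the sample dictionary layout, the `text_boxes` field (assumed to come from some upstream text detector or OCR model), and the area threshold are all assumptions introduced here.

```python
def text_area_fraction(boxes, image_w, image_h):
    """Fraction of the image covered by detected text boxes.

    `boxes` is a list of (x, y, w, h) rectangles produced by any
    text detector; box overlaps are ignored for simplicity.
    """
    total = sum(w * h for (_x, _y, w, h) in boxes)
    return total / float(image_w * image_h)


def filter_samples(samples, max_text_fraction=0.0):
    """Keep only image-text pairs whose detected text area is at or
    below the threshold.

    With the default threshold of 0.0, any detected text region drops
    the sample, i.e. the strict filtering the abstract describes.
    The threshold value is a hypothetical knob, not from the paper.
    """
    kept = []
    for s in samples:
        frac = text_area_fraction(s["text_boxes"], s["width"], s["height"])
        if frac <= max_text_fraction:
            kept.append(s)
    return kept
```

In practice the `text_boxes` field would be populated by running a scene-text detector over the training corpus once, before CLIP pre-training begins, so the filter itself is a cheap post-processing pass over detector outputs.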

Authors (8)
  1. Liangliang Cao (52 papers)
  2. Bowen Zhang (161 papers)
  3. Chen Chen (752 papers)
  4. Yinfei Yang (73 papers)
  5. Xianzhi Du (30 papers)
  6. Wencong Zhang (4 papers)
  7. Zhiyun Lu (19 papers)
  8. Yantao Zheng (3 papers)
Citations (13)