CAE v2: Context Autoencoder with CLIP Target (2211.09799v1)

Published 17 Nov 2022 in cs.CV

Abstract: Masked image modeling (MIM) learns visual representations by masking and reconstructing image patches. Applying reconstruction supervision on the CLIP representation has proven effective for MIM. However, how CLIP supervision in MIM influences performance remains under-explored. To investigate strategies for refining CLIP-targeted MIM, we study two critical elements of MIM, i.e., the supervision position and the mask ratio, and reveal two interesting findings using a simple pipeline we develop, the context autoencoder with CLIP target (CAE v2). First, we observe that supervision on visible patches achieves remarkable performance, even better than supervision on masked patches, which is the standard format in existing MIM methods. Second, the optimal mask ratio positively correlates with model size: the smaller the model, the lower the mask ratio needs to be. Driven by these two discoveries, our simple and concise approach CAE v2 achieves superior performance on a series of downstream tasks. For example, a vanilla ViT-Large model pre-trained for 300 epochs achieves 81.7% top-1 accuracy with linear probing and 86.7% with fine-tuning on ImageNet-1K, and 55.9% mIoU on ADE20K semantic segmentation. We hope our findings serve as helpful guidelines for pre-training in the MIM area, especially for small-scale models.
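
To make the first finding concrete, below is a minimal PyTorch-style sketch of CLIP-targeted MIM with supervision applied to the visible patches, the variant the abstract reports as strongest. The names `student_encoder` and `clip_teacher`, their signatures, and the default `mask_ratio` are hypothetical stand-ins for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): CLIP-targeted MIM loss
# computed on *visible* patches only.
import torch
import torch.nn.functional as F

def visible_patch_clip_loss(student_encoder, clip_teacher, images, mask_ratio=0.25):
    """Cosine alignment between student features on visible patches and
    frozen CLIP features of the same patches.

    Assumes both models map images to per-patch features of shape (B, N, D);
    `student_encoder(images, visible_ids)` returning features for only the
    visible patches is a hypothetical interface.
    """
    B = images.shape[0]
    with torch.no_grad():
        target = clip_teacher(images)              # (B, N, D), frozen CLIP targets
    N, D = target.shape[1], target.shape[2]

    # Randomly mask a fraction of patches; the rest stay visible.
    # The paper's second finding suggests smaller models want a lower ratio.
    num_masked = int(mask_ratio * N)
    noise = torch.rand(B, N, device=images.device)
    ids = noise.argsort(dim=1)                     # random permutation per sample
    visible_ids = ids[:, num_masked:]              # indices of visible patches

    pred = student_encoder(images, visible_ids)    # (B, N - num_masked, D)
    tgt = torch.gather(
        target, 1, visible_ids.unsqueeze(-1).expand(-1, -1, D)
    )

    # Supervise only the visible positions with a cosine similarity loss.
    return 1 - F.cosine_similarity(pred, tgt, dim=-1).mean()
```

In this sketch, switching the gather to the masked indices (`ids[:, :num_masked]`) would recover the standard masked-patch supervision that the paper compares against.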

Authors (13)
  1. Xinyu Zhang (296 papers)
  2. Jiahui Chen (72 papers)
  3. Junkun Yuan (19 papers)
  4. Qiang Chen (98 papers)
  5. Jian Wang (967 papers)
  6. Xiaodi Wang (15 papers)
  7. Shumin Han (18 papers)
  8. Xiaokang Chen (39 papers)
  9. Jimin Pi (6 papers)
  10. Kun Yao (32 papers)
  11. Junyu Han (53 papers)
  12. Errui Ding (156 papers)
  13. Jingdong Wang (236 papers)
Citations (25)