Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions (2010.12831v2)

Published 24 Oct 2020 in cs.CL, cs.CV, and cs.LG

Abstract: Pre-trained contextual vision-and-language (V&L) models have achieved impressive performance on various benchmarks. However, existing models require a large amount of parallel image-caption data for pre-training, and such data are costly to collect and require cumbersome curation. Inspired by unsupervised machine translation, we investigate whether a strong V&L representation model can be learned through unsupervised pre-training without image-caption corpora. In particular, we propose to conduct "mask-and-predict" pre-training on text-only and image-only corpora, and we introduce object tags detected by an object recognition model as anchor points to bridge the two modalities. We find that this simple approach achieves performance close to that of a model pre-trained with aligned data on four English V&L benchmarks. Our work challenges the widely held notion that aligned data is necessary for V&L pre-training, while significantly reducing the amount of supervision needed for V&L models.
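
The recipe the abstract describes can be illustrated concretely: run a masked-prediction objective on text-only batches, run the same objective over detector region features plus detected object tags on image-only batches, and embed the tags with the same word-embedding table as the text so they act as cross-modal anchors. The PyTorch code below is a minimal, hypothetical sketch of that idea; all module names, dimensions, and masking rates are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal, hypothetical sketch of "mask-and-predict" pre-training on
# unaligned text-only and image-only corpora. Names, sizes, and rates
# are illustrative assumptions, not the paper's actual model.
import torch
import torch.nn as nn

VOCAB_SIZE = 30522   # BERT-style wordpiece vocab size (assumption)
HIDDEN = 256         # toy hidden size
REGION_DIM = 2048    # typical Faster R-CNN region-feature dimension
MASK_ID = 103        # [MASK] id under BERT's convention

def random_mask(ids, rate=0.15):
    """Mask a random subset of positions, guaranteeing at least one."""
    mask = torch.rand(ids.shape) < rate
    mask[..., 0] = True          # keep the toy loss well-defined
    masked = ids.clone()
    masked[mask] = MASK_ID
    return masked, mask

class UnalignedVLModel(nn.Module):
    def __init__(self):
        super().__init__()
        # One shared encoder and one shared word-embedding table: detected
        # object tags are embedded exactly like caption words, which is
        # what lets them bridge the two modalities as anchor points.
        self.word_emb = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.region_proj = nn.Linear(REGION_DIM, HIDDEN)
        layer = nn.TransformerEncoderLayer(d_model=HIDDEN, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.mlm_head = nn.Linear(HIDDEN, VOCAB_SIZE)

    def text_step(self, token_ids):
        # Standard masked language modeling on a text-only batch.
        masked, mask = random_mask(token_ids)
        h = self.encoder(self.word_emb(masked))
        return nn.functional.cross_entropy(self.mlm_head(h)[mask],
                                           token_ids[mask])

    def image_step(self, region_feats, tag_ids):
        # Image-only batch: detector region features plus detected object
        # tags. Some tags are masked and predicted from the surrounding
        # regions and tags (mask-and-predict on the visual side).
        masked_tags, mask = random_mask(tag_ids)
        seq = torch.cat([self.region_proj(region_feats),
                         self.word_emb(masked_tags)], dim=1)
        h = self.encoder(seq)
        tag_h = h[:, region_feats.size(1):]   # tag positions only
        return nn.functional.cross_entropy(self.mlm_head(tag_h)[mask],
                                           tag_ids[mask])

# Toy usage with random data; in practice text and image batches alternate.
model = UnalignedVLModel()
loss = model.text_step(torch.randint(1000, (2, 16))) \
     + model.image_step(torch.randn(2, 10, REGION_DIM),
                        torch.randint(1000, (2, 5)))
loss.backward()
```

Because the two objectives never see an aligned image-caption pair, the only supervision tying the modalities together is the shared embedding of the detected tag words, which is the sense in which the pre-training is unsupervised with respect to parallel data.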

Authors (6)
  1. Liunian Harold Li (19 papers)
  2. Haoxuan You (33 papers)
  3. Zhecan Wang (18 papers)
  4. Alireza Zareian (16 papers)
  5. Shih-Fu Chang (131 papers)
  6. Kai-Wei Chang (292 papers)
Citations (12)