Pre-training image-language transformers for open-vocabulary tasks (2209.04372v1)

Published 9 Sep 2022 in cs.CV

Abstract: We present a pre-training approach for vision and language transformer models, which is based on a mixture of diverse tasks. We explore both the use of image-text captioning data in pre-training, which does not need additional supervision, as well as object-aware strategies to pre-train the model. We evaluate the method on a number of text-generative vision+language tasks, such as Visual Question Answering, visual entailment and captioning, and demonstrate large gains over standard pre-training methods.
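The abstract describes pre-training on a mixture of objectives (caption-style text generation plus object-aware tasks). Below is a minimal, hedged sketch of what such task mixing could look like; it is not the authors' implementation, and all module names, task weights, and the toy object-aware objective are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): mixing several pre-training
# objectives for an image-language model by sampling one task per batch.
import random
import torch
import torch.nn as nn

class ToyImageLanguageModel(nn.Module):
    """Stand-in image-to-text model: projects image features into the
    decoder's initial state and predicts next-token logits."""
    def __init__(self, img_dim=64, vocab_size=100, hidden=64):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)
        self.tok_emb = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.lm_head = nn.Linear(hidden, vocab_size)

    def forward(self, image_feats, tokens):
        ctx = self.img_proj(image_feats).unsqueeze(0)   # (1, B, H) initial state
        x = self.tok_emb(tokens)                        # (B, T, H)
        out, _ = self.decoder(x, ctx)
        return self.lm_head(out)                        # (B, T, V)

def captioning_loss(model, image_feats, captions):
    """Next-token prediction on image-text captioning pairs."""
    logits = model(image_feats, captions[:, :-1])
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), captions[:, 1:].reshape(-1))

def object_aware_loss(model, image_feats, object_tokens):
    """Hypothetical object-aware objective: generate tokens naming the
    objects present in the image (a stand-in for object-aware tasks)."""
    logits = model(image_feats, object_tokens[:, :-1])
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), object_tokens[:, 1:].reshape(-1))

# Task mixture: sample one objective per step with assumed fixed weights.
TASKS = [(captioning_loss, 0.7), (object_aware_loss, 0.3)]

model = ToyImageLanguageModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(5):
    loss_fn, _ = random.choices(TASKS, weights=[w for _, w in TASKS])[0]
    img = torch.randn(8, 64)              # fake image features
    txt = torch.randint(0, 100, (8, 12))  # fake token sequences
    loss = loss_fn(model, img, txt)
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"step {step}: {loss_fn.__name__} loss = {loss.item():.3f}")
```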

Authors (3)
  1. AJ Piergiovanni (40 papers)
  2. Weicheng Kuo (23 papers)
  3. Anelia Angelova (61 papers)
Citations (8)
