Pre-training image-language transformers for open-vocabulary tasks (2209.04372v1)
Published 9 Sep 2022 in cs.CV
Abstract: We present a pre-training approach for vision and language transformer models, which is based on a mixture of diverse tasks. We explore both the use of image-text captioning data in pre-training, which requires no additional supervision, and object-aware strategies for pre-training the model. We evaluate the method on a number of text-generative vision+language tasks, such as Visual Question Answering, visual entailment, and captioning, and demonstrate large gains over standard pre-training methods.
- AJ Piergiovanni (40 papers)
- Weicheng Kuo (23 papers)
- Anelia Angelova (61 papers)
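
As a rough illustration of the idea described in the abstract, the sketch below shows what pre-training on a mixture of diverse tasks could look like, with one task sampled per training step. The task names, mixture weights, and the `model.loss` / batch fields are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical mixture of vision+language pre-training tasks and their
# sampling weights (assumed for illustration; not from the paper).
TASKS = {
    "captioning": 0.5,    # generate the caption text for an image
    "masked_text": 0.3,   # reconstruct masked caption tokens
    "object_aware": 0.2,  # generate object-centric text for the image
}


def sample_task(rng: random.Random) -> str:
    """Pick one pre-training task per step according to the mixture weights."""
    names, weights = zip(*TASKS.items())
    return rng.choices(names, weights=weights, k=1)[0]


def pretrain_step(batch, model, task: str):
    """Route the batch through the sampled task's text-generative objective.

    Framing every task as text generation lets a single encoder-decoder
    loss be reused across the mixture (assumed interface, for illustration).
    """
    targets = batch["text_targets"][task]
    return model.loss(batch["image"], targets)


if __name__ == "__main__":
    # Demo of the task-sampling schedule only; no model is trained here.
    rng = random.Random(0)
    for step in range(5):
        print(step, sample_task(rng))
```

Sampling a task per step (rather than summing all losses each step) is one simple way to balance heterogeneous objectives; the actual balancing strategy used in the paper may differ.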