An Empirical Study of Training End-to-End Vision-and-Language Transformers (2111.02387v3)
Abstract: Vision-and-language (VL) pre-training has proven to be highly effective on various VL downstream tasks. While recent work has shown that fully transformer-based VL models can be more efficient than previous region-feature-based methods, their performance on downstream tasks often degrades significantly. In this paper, we present METER, a Multimodal End-to-end TransformER framework, through which we investigate how to design and pre-train a fully transformer-based VL model in an end-to-end manner. Specifically, we dissect the model designs along multiple dimensions: vision encoders (e.g., CLIP-ViT, Swin transformer), text encoders (e.g., RoBERTa, DeBERTa), multimodal fusion module (e.g., merged attention vs. co-attention), architectural design (e.g., encoder-only vs. encoder-decoder), and pre-training objectives (e.g., masked image modeling). We conduct comprehensive experiments and provide insights on how to train a performant VL transformer. METER achieves an accuracy of 77.64% on the VQAv2 test-std set using only 4M images for pre-training, surpassing the state-of-the-art region-feature-based model by 1.04%, and outperforming the previous best fully transformer-based model by 1.6%. Notably, when further scaled up, our best VQA model achieves an accuracy of 80.54%. Code and pre-trained models are released at https://github.com/zdou0830/METER.
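The abstract contrasts two multimodal fusion designs, merged attention and co-attention. Below is a minimal sketch (not the released METER code) of what each fusion block looks like in PyTorch; the class names, dimensions, and single-layer structure are illustrative assumptions, not the paper's exact architecture.

```python
# Illustrative sketch of the two fusion styles named in the abstract:
#  - merged attention: concatenate text and image tokens, run joint self-attention
#  - co-attention: keep two streams, each cross-attends to the other
# All names and hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn


class MergedAttentionBlock(nn.Module):
    """Concatenate text and image tokens and apply one self-attention layer."""

    def __init__(self, dim: int = 768, num_heads: int = 12):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        x = torch.cat([text, image], dim=1)   # (B, T_txt + T_img, D)
        out, _ = self.attn(x, x, x)           # joint self-attention over both modalities
        return self.norm(x + out)


class CoAttentionBlock(nn.Module):
    """Keep separate text/image streams; each queries the other via cross-attention."""

    def __init__(self, dim: int = 768, num_heads: int = 12):
        super().__init__()
        self.txt_cross = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.img_cross = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.txt_norm = nn.LayerNorm(dim)
        self.img_norm = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, image: torch.Tensor):
        t_out, _ = self.txt_cross(text, image, image)  # text tokens attend to image tokens
        i_out, _ = self.img_cross(image, text, text)   # image tokens attend to text tokens
        return self.txt_norm(text + t_out), self.img_norm(image + i_out)


if __name__ == "__main__":
    text = torch.randn(2, 32, 768)    # e.g., RoBERTa-style text-encoder outputs
    image = torch.randn(2, 196, 768)  # e.g., ViT-style patch features
    print(MergedAttentionBlock()(text, image).shape)  # torch.Size([2, 228, 768])
    t, i = CoAttentionBlock()(text, image)
    print(t.shape, i.shape)                            # [2, 32, 768] and [2, 196, 768]
```

In the merged-attention design the two modalities share one set of attention parameters over the concatenated sequence, whereas the co-attention design keeps per-modality streams and exchanges information only through cross-attention; the paper's experiments compare these choices empirically.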
- Zi-Yi Dou
- Yichong Xu
- Zhe Gan
- Jianfeng Wang
- Shuohang Wang
- Lijuan Wang
- Chenguang Zhu
- Pengchuan Zhang
- Lu Yuan
- Nanyun Peng
- Zicheng Liu
- Michael Zeng