E2E-VLP: End-to-End Vision-Language Pre-training Enhanced by Visual Learning (2106.01804v2)

Published 3 Jun 2021 in cs.CV, cs.AI, and cs.CL

Abstract: Vision-language pre-training (VLP) on large-scale image-text pairs has achieved great success on cross-modal downstream tasks. Most existing pre-training methods adopt a two-step training procedure: a pre-trained object detector first extracts region-based visual features, and the image representation and text embedding are then concatenated as the input of a Transformer for training. However, these methods suffer from relying on the task-specific visual representations of a particular object detector for generic cross-modal understanding, and from the computational inefficiency of the two-stage pipeline. In this paper, we propose the first end-to-end vision-language pre-trained model for both V+L understanding and generation, namely E2E-VLP, in which we build a unified Transformer framework to jointly learn visual representations and semantic alignments between image and text. We incorporate the tasks of object detection and image captioning into pre-training with a unified Transformer encoder-decoder architecture to enhance visual learning. Extensive experiments have been conducted on well-established vision-language downstream tasks to demonstrate the effectiveness of this novel VLP paradigm.
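
To make the described paradigm concrete, below is a minimal, illustrative PyTorch sketch of an end-to-end pipeline of the kind the abstract describes: a CNN backbone produces grid features (rather than detector regions), a shared Transformer encoder fuses visual and textual tokens, and a Transformer decoder supports generation-style pre-training such as image captioning. All module names, sizes, and the fusion scheme are assumptions for illustration, not the authors' exact architecture; the detection pre-training head is omitted for brevity.

```python
# Illustrative sketch only; hyperparameters and structure are assumed, not taken from the paper.
import torch
import torch.nn as nn
import torchvision


class E2EVLPSketch(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, nhead=8, num_layers=6):
        super().__init__()
        # Visual backbone trained end-to-end; grid features replace detector regions.
        backbone = torchvision.models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])  # (B, 2048, H/32, W/32)
        self.visual_proj = nn.Conv2d(2048, d_model, kernel_size=1)
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # Single encoder-decoder Transformer shared across pre-training tasks.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.lm_head = nn.Linear(d_model, vocab_size)  # captioning / generation head

    def forward(self, images, text_ids, target_ids):
        # Flatten grid visual features into a token sequence.
        feat = self.visual_proj(self.backbone(images))    # (B, d, h, w)
        vis_tokens = feat.flatten(2).transpose(1, 2)      # (B, h*w, d)
        txt_tokens = self.text_embed(text_ids)            # (B, T, d)
        # Encoder jointly attends over visual and textual tokens.
        memory_in = torch.cat([vis_tokens, txt_tokens], dim=1)
        # Decoder consumes the fused representation for generation-style tasks.
        tgt = self.text_embed(target_ids)
        out = self.transformer(memory_in, tgt)
        return self.lm_head(out)                           # (B, T_tgt, vocab)


# Toy usage with random tensors.
model = E2EVLPSketch()
logits = model(torch.randn(2, 3, 224, 224),
               torch.randint(0, 30522, (2, 16)),
               torch.randint(0, 30522, (2, 12)))
```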

Authors (7)
  1. Haiyang Xu (67 papers)
  2. Ming Yan (190 papers)
  3. Chenliang Li (92 papers)
  4. Bin Bi (24 papers)
  5. Songfang Huang (51 papers)
  6. Wenming Xiao (2 papers)
  7. Fei Huang (408 papers)
Citations (115)