DU-VLG: Unifying Vision-and-Language Generation via Dual Sequence-to-Sequence Pre-training (2203.09052v1)

Published 17 Mar 2022 in cs.CV, cs.AI, and cs.CL

Abstract: Due to the limitations of the model structure and pre-training objectives, existing vision-and-language generation models cannot utilize pair-wise images and text through bi-directional generation. In this paper, we propose DU-VLG, a framework which unifies vision-and-language generation as sequence generation problems. DU-VLG is trained with novel dual pre-training tasks: multi-modal denoising autoencoder tasks and modality translation tasks. To bridge the gap between image understanding and generation, we further design a novel commitment loss. We compare pre-training objectives on image captioning and text-to-image generation datasets. Results show that DU-VLG yields better performance than variants trained with uni-directional generation objectives or the variant without the commitment loss. We also obtain higher scores compared to previous state-of-the-art systems on three vision-and-language generation tasks. In addition, human judges further confirm that our model generates real and relevant images as well as faithful and informative captions.
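The "commitment loss" the abstract mentions is described as bridging image understanding and generation; the paper's exact formulation is not given here, but the term usually refers to a VQ-VAE-style objective that pulls continuous encoder outputs toward their nearest discrete visual-token embeddings. The sketch below is a minimal illustration of that general idea under those assumptions, not DU-VLG's actual loss; all names and the weighting factor `beta` are hypothetical.

```python
import numpy as np

def quantize_with_commitment(z_e, codebook, beta=0.25):
    """Nearest-neighbour quantization plus a VQ-VAE-style commitment term.

    z_e:      (n, d) continuous encoder outputs
    codebook: (k, d) embeddings of the discrete visual tokens
    beta:     commitment weight (hypothetical default)
    """
    # squared distance from each encoder output to every codebook entry
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)   # chosen visual-token ids
    z_q = codebook[idx]          # quantized vectors
    # commitment term: pull encoder outputs toward their chosen codes;
    # in an autograd framework z_q would be detached here (stop-gradient)
    loss = beta * ((z_e - z_q) ** 2).mean()
    return idx, z_q, loss
```

In a bi-directional setup like the one the abstract describes, such a term would encourage the image encoder used for understanding and the discrete tokens used for generation to share one representation space.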

Authors (5)
  1. Luyang Huang (8 papers)
  2. Guocheng Niu (5 papers)
  3. Jiachen Liu (45 papers)
  4. Xinyan Xiao (41 papers)
  5. Hua Wu (191 papers)
Citations (6)