DU-VLG: Unifying Vision-and-Language Generation via Dual Sequence-to-Sequence Pre-training (2203.09052v1)
Abstract: Due to limitations of model structure and pre-training objectives, existing vision-and-language generation models cannot exploit paired images and text through bi-directional generation. In this paper, we propose DU-VLG, a framework that unifies vision-and-language generation as sequence generation problems. DU-VLG is trained with novel dual pre-training tasks: multi-modal denoising autoencoder tasks and modality translation tasks. To bridge the gap between image understanding and generation, we further design a novel commitment loss. We compare pre-training objectives on image captioning and text-to-image generation datasets. Results show that DU-VLG outperforms variants trained with uni-directional generation objectives and a variant without the commitment loss. We also obtain higher scores than previous state-of-the-art systems on three vision-and-language generation tasks. In addition, human judges confirm that our model generates realistic and relevant images as well as faithful and informative captions.
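
The abstract mentions a commitment loss that bridges image understanding and generation but does not spell out its form. The sketch below shows one plausible reading under the assumption that it resembles a VQ-VAE-style commitment term, pulling the encoder's continuous image features toward frozen discrete code embeddings used on the generation side, added on top of the dual sequence-to-sequence objectives. All names, shapes, and the weighting factor here are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def commitment_loss(encoder_features: torch.Tensor,
                    code_embeddings: torch.Tensor) -> torch.Tensor:
    """Hypothetical commitment term (VQ-VAE-style assumption):
    pull continuous image features toward the detached discrete
    code embeddings, so gradients update only the encoder side."""
    return F.mse_loss(encoder_features, code_embeddings.detach())

# Toy usage: batch of 4 images, 64 visual tokens each, 768-dim features.
features = torch.randn(4, 64, 768, requires_grad=True)
codes = torch.randn(4, 64, 768)

loss_commit = commitment_loss(features, codes)
loss_commit.backward()  # gradients flow only into the continuous features

# In pre-training, such a term would presumably be added to the two
# sequence-to-sequence losses described in the abstract, e.g.:
#   total = denoising_loss + modality_translation_loss + beta * loss_commit
# where beta is a hypothetical weighting hyperparameter.
```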
- Luyang Huang (8 papers)
- Guocheng Niu (5 papers)
- Jiachen Liu (45 papers)
- Xinyan Xiao (41 papers)
- Hua Wu (191 papers)