Unified Multimodal Pre-training and Prompt-based Tuning for Vision-Language Understanding and Generation (2112.05587v2)

Published 10 Dec 2021 in cs.CV, cs.CL, and cs.LG

Abstract: Most existing vision-language pre-training methods focus on understanding tasks and use BERT-like objectives (masked language modeling and image-text matching) during pre-training. Although they perform well on many downstream understanding tasks, e.g., visual question answering, image-text retrieval, and visual entailment, they do not possess the ability to generate. To tackle this problem, we propose Unified multimodal pre-training for both Vision-Language understanding and generation (UniVL). The proposed UniVL is capable of handling both understanding tasks and generative tasks. We augment existing pre-training paradigms that only use random masks with causal masks, i.e., triangular masks that mask out future tokens, such that the pre-trained models have autoregressive generation abilities by design. We formulate several previous understanding tasks as text generation tasks and propose a prompt-based method for fine-tuning on different downstream tasks. Our experiments show that there is a trade-off between understanding tasks and generation tasks when using the same model, and that a feasible way to improve both is to use more data. Our UniVL framework attains comparable performance to recent vision-language pre-training methods on both understanding and generation tasks. Moreover, we demonstrate that prompt-based fine-tuning is more data-efficient: it outperforms discriminative methods in few-shot scenarios.
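The causal-mask idea described in the abstract can be illustrated with a minimal sketch (this is a generic illustration, not UniVL's actual implementation): a lower-triangular attention mask hides future tokens so the model can generate autoregressively, whereas BERT-style understanding objectives use a fully visible mask.

```python
import torch

def causal_mask(seq_len: int) -> torch.Tensor:
    # Lower-triangular (causal) mask: position i may attend only to positions <= i,
    # so future tokens are hidden and the model can be trained to generate left-to-right.
    return torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

def bidirectional_mask(seq_len: int) -> torch.Tensor:
    # Fully visible mask, as used by BERT-like masked-language-modeling objectives.
    return torch.ones(seq_len, seq_len, dtype=torch.bool)

# Example: a 4-token sequence
print(causal_mask(4))
# tensor([[ True, False, False, False],
#         [ True,  True, False, False],
#         [ True,  True,  True, False],
#         [ True,  True,  True,  True]])
```

During pre-training, a model can mix both mask types across batches so that a single set of weights supports understanding (bidirectional) and generation (causal) tasks, which is the trade-off the abstract refers to.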

Authors (5)
  1. Tianyi Liu (58 papers)
  2. Zuxuan Wu (144 papers)
  3. Wenhan Xiong (47 papers)
  4. Jingjing Chen (99 papers)
  5. Yu-Gang Jiang (223 papers)
Citations (10)