Towards a Unified Foundation Model: Jointly Pre-Training Transformers on Unpaired Images and Text (2112.07074v1)

Published 14 Dec 2021 in cs.CV and cs.LG

Abstract: In this paper, we explore the possibility of building a unified foundation model that can be adapted to both vision-only and text-only tasks. Starting from BERT and ViT, we design a unified transformer consisting of modality-specific tokenizers, a shared transformer encoder, and task-specific output heads. To efficiently pre-train the proposed model jointly on unpaired images and text, we propose two novel techniques: (i) We employ the separately-trained BERT and ViT models as teachers and apply knowledge distillation to provide additional, accurate supervision signals for the joint training; (ii) We propose a novel gradient masking strategy to balance the parameter updates from the image and text pre-training losses. We evaluate the jointly pre-trained transformer by fine-tuning it on image classification tasks and natural language understanding tasks, respectively. The experiments show that the resultant unified foundation transformer works surprisingly well on both the vision-only and text-only tasks, and the proposed knowledge distillation and gradient masking strategy can effectively lift the performance to approach the level of separately-trained models.
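To make technique (ii) concrete, the sketch below shows one plausible way a gradient-masked joint update could be structured in PyTorch. This is a minimal, hypothetical illustration, not the paper's exact algorithm: the model object, image_loss_fn, text_loss_fn, and mask_ratio are assumed placeholders, and the random Bernoulli mask is only one possible way to balance the image and text parameter updates on the shared encoder.

```python
# Hypothetical sketch of a gradient-masked joint pre-training step.
# Assumptions (not from the paper): `model` is a unified transformer whose
# parameters receive gradients from both losses; `image_loss_fn` and
# `text_loss_fn` return scalar losses; `mask_ratio` controls how often an
# element is updated by the image gradient rather than the text gradient.
import torch

def joint_step(model, optimizer, image_batch, text_batch,
               image_loss_fn, text_loss_fn, mask_ratio=0.5):
    # Gradients from the image pre-training loss.
    optimizer.zero_grad()
    image_loss_fn(model, image_batch).backward()
    image_grads = {n: p.grad.detach().clone()
                   for n, p in model.named_parameters() if p.grad is not None}

    # Gradients from the text pre-training loss.
    optimizer.zero_grad()
    text_loss_fn(model, text_batch).backward()
    text_grads = {n: p.grad.detach().clone()
                  for n, p in model.named_parameters() if p.grad is not None}

    # Gradient masking: for parameters touched by both losses, a binary mask
    # decides element-wise which modality's gradient is applied, so neither
    # pre-training objective dominates the shared weights.
    optimizer.zero_grad()
    for name, param in model.named_parameters():
        g_img = image_grads.get(name)
        g_txt = text_grads.get(name)
        if g_img is None and g_txt is None:
            continue
        if g_img is None:
            param.grad = g_txt
        elif g_txt is None:
            param.grad = g_img
        else:
            mask = (torch.rand_like(g_img) < mask_ratio).float()
            param.grad = mask * g_img + (1.0 - mask) * g_txt
    optimizer.step()
```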

Authors (7)
  1. Qing Li (430 papers)
  2. Boqing Gong (100 papers)
  3. Yin Cui (45 papers)
  4. Dan Kondratyuk (11 papers)
  5. Xianzhi Du (30 papers)
  6. Ming-Hsuan Yang (377 papers)
  7. Matthew Brown (33 papers)
Citations (16)
