ERNIE-UniX2: A Unified Cross-lingual Cross-modal Framework for Understanding and Generation (2211.04861v1)

Published 9 Nov 2022 in cs.CV

Abstract: Recent cross-lingual cross-modal works attempt to extend Vision-Language Pre-training (VLP) models to non-English inputs and achieve impressive performance. However, these models focus only on understanding tasks, using encoder-only architectures. In this paper, we propose ERNIE-UniX2, a unified cross-lingual cross-modal pre-training framework for both generation and understanding tasks. ERNIE-UniX2 integrates multiple pre-training paradigms (e.g., contrastive learning and language modeling) based on an encoder-decoder architecture and attempts to learn a better joint representation across languages and modalities. Furthermore, ERNIE-UniX2 can be seamlessly fine-tuned for a variety of generation and understanding downstream tasks. Pre-trained on both multilingual text-only and image-text datasets, ERNIE-UniX2 achieves SOTA results on various cross-lingual cross-modal generation and understanding tasks such as multimodal machine translation and multilingual visual question answering.
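The abstract's central idea, combining a contrastive (image-text alignment) objective with a language-modeling (generation) objective on a shared encoder-decoder, can be sketched as follows. This is a minimal illustration only, not the paper's actual implementation: the module choices, dimensions, pooling, and the 1:1 loss weighting are all assumptions.

```python
import torch
import torch.nn.functional as F
from torch import nn

class UnifiedCrossLingualCrossModalSketch(nn.Module):
    """Illustrative sketch: a shared encoder-decoder trained jointly with a
    contrastive image-text loss and a language-modeling loss. All
    architectural details here are stand-ins, not ERNIE-UniX2's design."""

    def __init__(self, d_model=512, vocab_size=32000, image_feat_dim=2048):
        super().__init__()
        self.image_proj = nn.Linear(image_feat_dim, d_model)  # stand-in for a vision backbone
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)
        self.temperature = 0.07  # assumed fixed contrastive temperature

    def forward(self, image_feats, src_tokens, tgt_tokens):
        # Encode image regions and source-language tokens in one sequence.
        img = self.image_proj(image_feats)                   # (B, R, d)
        txt = self.text_embed(src_tokens)                    # (B, S, d)
        memory = self.encoder(torch.cat([img, txt], dim=1))  # (B, R+S, d)

        # Contrastive loss: align mean-pooled image and text representations
        # across the batch (in-batch negatives).
        img_vec = F.normalize(memory[:, :img.size(1)].mean(dim=1), dim=-1)
        txt_vec = F.normalize(memory[:, img.size(1):].mean(dim=1), dim=-1)
        logits = img_vec @ txt_vec.t() / self.temperature
        labels = torch.arange(logits.size(0), device=logits.device)
        contrastive = (F.cross_entropy(logits, labels)
                       + F.cross_entropy(logits.t(), labels)) / 2

        # Language-modeling loss: decode target-language tokens (e.g., a
        # translation or caption) from the joint multimodal memory.
        dec_in = self.text_embed(tgt_tokens[:, :-1])
        dec_out = self.decoder(dec_in, memory)
        lm = F.cross_entropy(self.lm_head(dec_out).transpose(1, 2),
                             tgt_tokens[:, 1:])

        # Joint objective; equal weighting is an assumption of this sketch.
        return contrastive + lm
```

Because generation runs through the same decoder used in pre-training, fine-tuning for a downstream generation task (such as multimodal machine translation) reduces to continuing to train the language-modeling branch on task data, which is what makes a unified encoder-decoder attractive over encoder-only designs.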

Authors (8)
  1. Bin Shan
  2. Yaqian Han
  3. Weichong Yin
  4. Shuohuan Wang
  5. Yu Sun
  6. Hao Tian
  7. Hua Wu
  8. Haifeng Wang
Citations (6)