M3P: Learning Universal Representations via Multitask Multilingual Multimodal Pre-training (2006.02635v4)

Published 4 Jun 2020 in cs.CL and cs.CV

Abstract: We present M3P, a Multitask Multilingual Multimodal Pre-trained model that combines multilingual pre-training and multimodal pre-training into a unified framework via multitask pre-training. Our goal is to learn universal representations that can map objects that occur in different modalities, or texts expressed in different languages, into a common semantic space. In addition, to explicitly encourage fine-grained alignment between images and non-English languages, we also propose Multimodal Code-switched Training (MCT) to combine monolingual pre-training and multimodal pre-training via a code-switch strategy. Experiments are performed on the multilingual image retrieval task across two benchmark datasets, MSCOCO and Multi30K. M3P achieves comparable results for English and new state-of-the-art results for non-English languages.
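
The abstract describes MCT only at a high level. As a rough illustration of the code-switch idea (not the authors' implementation), the sketch below randomly replaces English caption tokens with bilingual-dictionary translations before the caption is paired with its image for pre-training. The dictionary contents, switch probability, and function names are illustrative assumptions.

```python
import random

# Tiny illustrative bilingual dictionary; a real setup would use large
# dictionaries covering many languages (assumption for illustration).
BILINGUAL_DICT = {
    "dog": {"de": "Hund", "fr": "chien"},
    "ball": {"de": "Ball", "fr": "balle"},
    "park": {"de": "Park", "fr": "parc"},
}

def code_switch(caption, switch_prob=0.3, languages=("de", "fr"), seed=None):
    """Randomly replace English caption tokens with dictionary
    translations, producing a code-switched caption."""
    rng = random.Random(seed)
    switched = []
    for tok in caption.split():
        translations = BILINGUAL_DICT.get(tok.lower())
        if translations and rng.random() < switch_prob:
            lang = rng.choice(list(languages))
            switched.append(translations.get(lang, tok))
        else:
            switched.append(tok)
    return " ".join(switched)

if __name__ == "__main__":
    print(code_switch("a dog plays with a ball in the park"))
    # possible output (random): "a Hund plays with a balle in the park"
```

In the paper's framing, such code-switched captions let non-English tokens co-occur with image regions during pre-training, encouraging fine-grained image-language alignment beyond English.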

Authors (9)
  1. Minheng Ni (18 papers)
  2. Haoyang Huang (27 papers)
  3. Lin Su (12 papers)
  4. Edward Cui (5 papers)
  5. Taroon Bharti (6 papers)
  6. Lijuan Wang (133 papers)
  7. Jianfeng Gao (344 papers)
  8. Dongdong Zhang (79 papers)
  9. Nan Duan (172 papers)
Citations (8)