Cross-lingual Visual Pre-training for Multimodal Machine Translation (2101.10044v2)

Published 25 Jan 2021 in cs.CL and cs.CV

Abstract: Pre-trained language models have been shown to substantially improve performance on many natural language tasks. Although the early focus of such models was single-language pre-training, recent advances have resulted in cross-lingual and visual pre-training methods. In this paper, we combine these two approaches to learn visually-grounded cross-lingual representations. Specifically, we extend translation language modelling (Lample and Conneau, 2019) with masked region classification and perform pre-training with three-way parallel vision & language corpora. We show that when fine-tuned for multimodal machine translation, these models obtain state-of-the-art performance. We also provide qualitative insights into the usefulness of the learned grounded representations.
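The pre-training objective described in the abstract combines two masked-prediction tasks over a single encoder: translation language modelling (TLM) on a concatenated source-target sentence pair, and masked region classification (MRC) on detected image regions. Below is a minimal PyTorch sketch of such a combined objective, not the authors' implementation; the class and variable names (e.g. `VisualTLMEncoder`), the feature dimensions, the 6-layer encoder, and the object-label set size are all illustrative assumptions.

```python
# Sketch (assumed, not from the paper) of a TLM + masked region classification objective.
import torch
import torch.nn as nn

VOCAB_SIZE = 30000          # assumed subword vocabulary size
NUM_OBJECT_CLASSES = 1600   # assumed object-detector label set
HIDDEN = 512
REGION_FEAT_DIM = 2048      # assumed region-feature size (e.g. from an object detector)


class VisualTLMEncoder(nn.Module):
    """Shared Transformer encoder over [source tokens ; target tokens ; image regions]."""

    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.region_proj = nn.Linear(REGION_FEAT_DIM, HIDDEN)
        layer = nn.TransformerEncoderLayer(d_model=HIDDEN, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.mlm_head = nn.Linear(HIDDEN, VOCAB_SIZE)            # predicts masked tokens (TLM)
        self.mrc_head = nn.Linear(HIDDEN, NUM_OBJECT_CLASSES)    # predicts masked regions' labels (MRC)

    def forward(self, token_ids, region_feats):
        text = self.tok_emb(token_ids)                 # (B, T, H)
        regions = self.region_proj(region_feats)       # (B, R, H)
        hidden = self.encoder(torch.cat([text, regions], dim=1))
        T = token_ids.size(1)
        return self.mlm_head(hidden[:, :T]), self.mrc_head(hidden[:, T:])


def pretraining_loss(model, token_ids, token_targets, region_feats, region_targets):
    """Targets hold gold labels at masked positions and -100 (ignored) elsewhere."""
    tok_logits, reg_logits = model(token_ids, region_feats)
    ce = nn.CrossEntropyLoss(ignore_index=-100)
    tlm = ce(tok_logits.reshape(-1, VOCAB_SIZE), token_targets.reshape(-1))
    mrc = ce(reg_logits.reshape(-1, NUM_OBJECT_CLASSES), region_targets.reshape(-1))
    return tlm + mrc  # equal weighting of the two terms is an assumption
```

Running both modalities through one shared encoder is what lets masked-token and masked-region predictions attend across languages and across text and vision; the equal sum of the two cross-entropy terms is simply one plausible weighting, not a detail taken from the paper.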

Authors (7)
  1. Ozan Caglayan (20 papers)
  2. Menekse Kuyu (2 papers)
  3. Mustafa Sercan Amac (3 papers)
  4. Pranava Madhyastha (37 papers)
  5. Erkut Erdem (46 papers)
  6. Aykut Erdem (46 papers)
  7. Lucia Specia (68 papers)
Citations (40)