Cross-lingual Visual Pre-training for Multimodal Machine Translation (2101.10044v2)
Abstract: Pre-trained language models have been shown to substantially improve performance on many natural language tasks. Although the early focus of such models was single-language pre-training, recent advances have resulted in cross-lingual and visual pre-training methods. In this paper, we combine these two approaches to learn visually-grounded cross-lingual representations. Specifically, we extend translation language modelling (Lample and Conneau, 2019) with masked region classification and perform pre-training with three-way parallel vision & language corpora. We show that when fine-tuned for multimodal machine translation, these models obtain state-of-the-art performance. We also provide qualitative insights into the usefulness of the learned grounded representations.
- Ozan Caglayan (20 papers)
- Menekse Kuyu (2 papers)
- Mustafa Sercan Amac (3 papers)
- Pranava Madhyastha (37 papers)
- Erkut Erdem (46 papers)
- Aykut Erdem (46 papers)
- Lucia Specia (68 papers)
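The abstract describes combining translation language modelling (TLM) over parallel sentence pairs with masked region classification (MRC) over image region features. The following is a minimal PyTorch sketch of that combined objective under stated assumptions: it is not the authors' implementation, positional and language embeddings are omitted, and all module names, dimensions, and the detector class count are illustrative placeholders.

```python
import torch
import torch.nn as nn

class VisualTLM(nn.Module):
    """Hypothetical sketch: a shared Transformer encoder over concatenated
    source+target tokens (TLM) and projected image region features (MRC)."""

    def __init__(self, vocab_size=30000, n_region_classes=1600,
                 d_model=512, n_heads=8, n_layers=6, region_dim=2048):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        # Project object-detector region features into the model dimension.
        self.region_proj = nn.Linear(region_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # TLM head: predict the identity of masked tokens.
        self.tlm_head = nn.Linear(d_model, vocab_size)
        # MRC head: predict the object class of masked regions.
        self.mrc_head = nn.Linear(d_model, n_region_classes)

    def forward(self, src_tgt_tokens, region_feats):
        # src_tgt_tokens: (B, T) concatenated source+target ids, some masked.
        # region_feats:   (B, R, region_dim) detector features, some masked out.
        tok = self.tok_emb(src_tgt_tokens)
        reg = self.region_proj(region_feats)
        h = self.encoder(torch.cat([tok, reg], dim=1))
        T = src_tgt_tokens.size(1)
        return self.tlm_head(h[:, :T]), self.mrc_head(h[:, T:])

# Usage sketch: cross-entropy is computed only at masked token positions (TLM)
# and masked region positions (MRC), and the two losses are summed.
model = VisualTLM()
tokens = torch.randint(0, 30000, (2, 20))
regions = torch.randn(2, 36, 2048)
tlm_logits, mrc_logits = model(tokens, regions)
```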