m3P: Towards Multimodal Multilingual Translation with Multimodal Prompt (2403.17556v1)

Published 26 Mar 2024 in cs.CL and cs.AI

Abstract: Multilingual translation supports multiple translation directions by projecting all languages into a shared space, but translation quality is undermined by the differences between languages in the text-only modality, especially when the number of languages is large. To bridge this gap, we introduce visual context as a universal, language-independent representation to facilitate multilingual translation. In this paper, we propose a framework that leverages a multimodal prompt to guide Multimodal Multilingual neural Machine Translation (m3P), which aligns the representations of different languages with the same meaning and generates a conditional vision-language memory for translation. We construct a multilingual multimodal instruction dataset (InstrMulti102) to support 102 languages. Our method aims to minimize the representation distance between different languages by regarding the image as a central language. Experimental results show that m3P outperforms previous text-only baselines and multilingual multimodal methods by a large margin. Furthermore, probing experiments validate the effectiveness of our method in enhancing translation in low-resource and massively multilingual scenarios.
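
The abstract's core idea, using the image as a "central language" to pull text representations of different languages together while producing a vision-language memory for the decoder, can be illustrated with a minimal sketch. This is not the authors' code: the module names, dimensions, InfoNCE-style alignment loss, and cross-attention fusion below are illustrative assumptions about how such a component could look.

```python
# Minimal sketch (assumed implementation, not the paper's) of image-as-pivot
# alignment plus a conditional vision-language memory for translation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultimodalPromptAligner(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, n_heads=8, temperature=0.07):
        super().__init__()
        self.text_emb = nn.Embedding(vocab_size, d_model)
        self.text_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=2,
        )
        # Stand-in for a vision encoder: projects precomputed image patch
        # features (e.g. from a frozen ViT) into the shared space.
        self.img_proj = nn.Linear(768, d_model)
        # Cross-attention that builds the conditional vision-language memory.
        self.fuse = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.temperature = temperature

    def forward(self, src_tokens, image_feats):
        # src_tokens: (batch, src_len) token ids in any source language
        # image_feats: (batch, n_patches, 768) precomputed image features
        txt = self.text_enc(self.text_emb(src_tokens))      # (B, L, d)
        img = self.img_proj(image_feats)                    # (B, P, d)
        # Pooled sentence/image vectors for the alignment objective.
        txt_vec = F.normalize(txt.mean(dim=1), dim=-1)      # (B, d)
        img_vec = F.normalize(img.mean(dim=1), dim=-1)      # (B, d)
        # InfoNCE-style loss: each sentence should match its own image, so
        # sentences in different languages that share an image are pulled
        # toward the same point in the shared space.
        logits = txt_vec @ img_vec.t() / self.temperature   # (B, B)
        targets = torch.arange(logits.size(0), device=logits.device)
        align_loss = F.cross_entropy(logits, targets)
        # Vision-language memory: text queries attend over image patches; a
        # downstream translation decoder would cross-attend to `memory`.
        memory, _ = self.fuse(txt, img, img)                # (B, L, d)
        return memory, align_loss


if __name__ == "__main__":
    model = MultimodalPromptAligner()
    tokens = torch.randint(0, 32000, (4, 16))   # toy batch of source sentences
    feats = torch.randn(4, 49, 768)             # toy 7x7 patch features
    memory, loss = model(tokens, feats)
    print(memory.shape, loss.item())
```

In this sketch the alignment loss and the translation loss (cross-entropy on the decoder output that attends to `memory`) would be optimized jointly; the exact objective, fusion mechanism, and prompt construction in m3P may differ.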

Authors (10)
  1. Jian Yang (505 papers)
  2. Hongcheng Guo (39 papers)
  3. Yuwei Yin (21 papers)
  4. Jiaqi Bai (19 papers)
  5. Bing Wang (246 papers)
  6. Jiaheng Liu (100 papers)
  7. Xinnian Liang (20 papers)
  8. Linzheng Chai (1 paper)
  9. Liqun Yang (18 papers)
  10. Zhoujun Li (122 papers)
Citations (7)