LVP-M3: Language-aware Visual Prompt for Multilingual Multimodal Machine Translation (2210.15461v2)

Published 19 Oct 2022 in cs.CL and cs.AI

Abstract: Multimodal Machine Translation (MMT) focuses on enhancing text-only translation with visual features, and has attracted considerable attention from both the natural language processing and computer vision communities. However, recent methods still train a separate model for each language pair, which is costly and becomes unaffordable as the number of languages grows in real-world settings. In other words, the multilingual multimodal machine translation (Multilingual MMT) task, which addresses this issue by providing a shared semantic space for multiple languages, has not yet been investigated. Moreover, the image modality has no language boundaries, making it well suited to bridging the semantic gap between languages. To this end, we first introduce the Multilingual MMT task and establish two new benchmark datasets covering seven languages. We then propose an effective baseline, LVP-M3, which uses visual prompts to support translation between different languages and consists of three stages: token encoding, language-aware visual prompt generation, and language translation. Extensive experimental results on the constructed benchmarks demonstrate the effectiveness of LVP-M3 for Multilingual MMT.
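To make the middle stage of this pipeline concrete, here is a minimal PyTorch sketch of a language-aware visual prompt module. It is an illustrative guess, not the authors' implementation: the module name, tensor shapes, and the fusion scheme (learnable prompt queries shifted by a target-language embedding, then cross-attending over projected image regions) are all assumptions made for this example.

```python
# Minimal sketch of language-aware visual prompt generation.
# NOT the LVP-M3 reference code; shapes and fusion scheme are assumptions.
import torch
import torch.nn as nn

class LanguageAwareVisualPrompt(nn.Module):
    """Produce prompt tokens from image features, conditioned on the target language."""

    def __init__(self, num_languages: int, vis_dim: int, model_dim: int, prompt_len: int = 4):
        super().__init__()
        self.lang_embed = nn.Embedding(num_languages, model_dim)    # one embedding per target language
        self.vis_proj = nn.Linear(vis_dim, model_dim)                # map image features into model space
        self.prompt_queries = nn.Parameter(torch.randn(prompt_len, model_dim))
        self.attn = nn.MultiheadAttention(model_dim, num_heads=8, batch_first=True)

    def forward(self, image_feats: torch.Tensor, tgt_lang_id: torch.Tensor) -> torch.Tensor:
        # image_feats: (batch, regions, vis_dim); tgt_lang_id: (batch,)
        vis = self.vis_proj(image_feats)                             # (batch, regions, model_dim)
        lang = self.lang_embed(tgt_lang_id).unsqueeze(1)             # (batch, 1, model_dim)
        # Shift the learnable prompt queries by the target-language embedding,
        # then cross-attend over the projected image regions.
        queries = self.prompt_queries.unsqueeze(0) + lang            # (batch, prompt_len, model_dim)
        prompt, _ = self.attn(queries, vis, vis)                     # (batch, prompt_len, model_dim)
        # The resulting prompt tokens would be fed to the translation stage,
        # e.g. prepended to the text encoder output (assumption).
        return prompt

# Usage example with dummy inputs.
module = LanguageAwareVisualPrompt(num_languages=7, vis_dim=2048, model_dim=512)
image_feats = torch.randn(2, 36, 2048)          # 2 images, 36 region features each
tgt_lang_id = torch.tensor([0, 3])              # target-language indices
print(module(image_feats, tgt_lang_id).shape)   # torch.Size([2, 4, 512])
```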

Authors (8)
  1. Hongcheng Guo (39 papers)
  2. Jiaheng Liu (100 papers)
  3. Haoyang Huang (27 papers)
  4. Jian Yang (503 papers)
  5. Zhoujun Li (122 papers)
  6. Dongdong Zhang (79 papers)
  7. Zheng Cui (12 papers)
  8. Furu Wei (291 papers)
Citations (20)