A Visual Attention Grounding Neural Model for Multimodal Machine Translation (1808.08266v2)

Published 24 Aug 2018 in cs.CL

Abstract: We introduce a novel multimodal machine translation model that utilizes parallel visual and textual information. Our model jointly optimizes the learning of a shared visual-language embedding and a translator. The model leverages a visual attention grounding mechanism that links the visual semantics with the corresponding textual semantics. Our approach achieves competitive state-of-the-art results on the Multi30K and the Ambiguous COCO datasets. We also collected a new multilingual multimodal product description dataset to simulate a real-world international online shopping scenario. On this dataset, our visual attention grounding model outperforms other methods by a large margin.
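The abstract describes two coupled components: a shared visual-language embedding learned jointly with the translator, and an attention mechanism that grounds textual semantics in image regions. The sketch below is a hypothetical PyTorch illustration of that general idea, not the authors' released code; all module names, dimensions, and the max-margin objective are assumptions made for illustration.

```python
# Hypothetical sketch (not the paper's implementation): a sentence
# representation attends over image region features, and both modalities
# are projected into a shared visual-language embedding space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualAttentionGrounding(nn.Module):
    def __init__(self, text_dim: int, img_dim: int, shared_dim: int):
        super().__init__()
        self.query_proj = nn.Linear(text_dim, shared_dim)  # text -> attention query
        self.key_proj = nn.Linear(img_dim, shared_dim)     # regions -> attention keys
        self.text_embed = nn.Linear(text_dim, shared_dim)  # text -> shared space
        self.img_embed = nn.Linear(img_dim, shared_dim)    # image -> shared space

    def forward(self, text_state: torch.Tensor, img_regions: torch.Tensor):
        # text_state:  (B, text_dim)        sentence-level encoder state
        # img_regions: (B, R, img_dim)      CNN region features
        q = self.query_proj(text_state).unsqueeze(1)          # (B, 1, D)
        k = self.key_proj(img_regions)                        # (B, R, D)
        attn = F.softmax((q * k).sum(-1), dim=-1)             # (B, R) region weights
        grounded = (attn.unsqueeze(-1) * img_regions).sum(1)  # (B, img_dim)
        # Project both modalities into the shared embedding space.
        t = F.normalize(self.text_embed(text_state), dim=-1)
        v = F.normalize(self.img_embed(grounded), dim=-1)
        return t, v, attn

def max_margin_loss(t: torch.Tensor, v: torch.Tensor, margin: float = 0.1):
    # Contrastive objective: pull matched text/image pairs together,
    # push mismatched in-batch pairs at least `margin` apart.
    scores = t @ v.t()                # (B, B) cosine similarities
    pos = scores.diag().unsqueeze(1)  # similarity of matched pairs
    cost = (margin + scores - pos).clamp(min=0)
    cost.fill_diagonal_(0)            # ignore the matched-pair cells
    return cost.mean()
```

In a joint training setup of the kind the abstract suggests, this loss would be added to the usual translation cross-entropy, so the encoder is pushed to produce sentence states that align with the attended visual content while the translator is trained end to end.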

Authors (4)
  1. Mingyang Zhou (27 papers)
  2. Runxiang Cheng (4 papers)
  3. Yong Jae Lee (88 papers)
  4. Zhou Yu (206 papers)
Citations (78)
