
Multi-Head Attention with Diversity for Learning Grounded Multilingual Multimodal Representations (1910.00058v1)

Published 30 Sep 2019 in cs.CL and cs.CV

Abstract: With the aim of promoting and understanding the multilingual version of image search, we leverage visual object detection and propose a model with diverse multi-head attention to learn grounded multilingual multimodal representations. Specifically, our model attends to different types of textual semantics in two languages and visual objects for fine-grained alignments between sentences and images. We introduce a new objective function which explicitly encourages attention diversity to learn an improved visual-semantic embedding space. We evaluate our model in the German-Image and English-Image matching tasks on the Multi30K dataset, and in the Semantic Textual Similarity task with the English descriptions of visual content. Results show that our model yields a significant performance gain over other methods in all of the three tasks.
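The abstract describes two ingredients: multi-head attention over textual and visual inputs, and an objective term that explicitly encourages the heads to attend differently. The paper's exact formulation is not given here, so the sketch below is a minimal illustration under assumptions: scaled dot-product attention heads, and a disagreement-style penalty `||A A^T - I||_F^2` over the per-head attention distributions (a common way to encourage diversity); the function names and shapes are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention_weights(query, keys, W_q, W_k):
    """Per-head attention distributions of one query over n keys.

    query: (d,), keys: (n, d); W_q, W_k: (h, d, d_k) per-head projections.
    Returns an (h, n) matrix whose rows each sum to 1.
    """
    q = np.einsum('d,hdk->hk', query, W_q)          # (h, d_k)
    k = np.einsum('nd,hdk->hnk', keys, W_k)         # (h, n, d_k)
    scores = np.einsum('hk,hnk->hn', q, k) / np.sqrt(W_q.shape[-1])
    return softmax(scores, axis=-1)                 # (h, n)

def diversity_penalty(attn):
    """Penalize overlap between heads' attention distributions.

    attn: (h, n). Zero when heads attend to disjoint positions,
    large when heads collapse onto the same positions.
    """
    h = attn.shape[0]
    gram = attn @ attn.T                            # (h, h) head overlaps
    return np.sum((gram - np.eye(h)) ** 2)
```

In training, a term like `loss = matching_loss + lam * diversity_penalty(attn)` would push the heads toward attending to different semantic types (e.g., distinct objects or phrases), which is the role the abstract assigns to the diversity objective.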

Authors (3)
  1. Po-Yao Huang (31 papers)
  2. Xiaojun Chang (148 papers)
  3. Alexander Hauptmann (46 papers)
Citations (25)
