
Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task (1910.03291v1)

Published 8 Oct 2019 in cs.CL, cs.IR, and cs.LG

Abstract: In this paper, we propose a new approach to learn multimodal multilingual embeddings for matching images and their relevant captions in two languages. We combine two existing objective functions to make images and captions close in a joint embedding space while adapting the alignment of word embeddings across languages in our model. We show that our approach enables better generalization, achieving state-of-the-art performance in the text-to-image and image-to-text retrieval tasks and the caption-caption similarity task. Two multimodal multilingual datasets are used for evaluation: Multi30k with German and English captions and Microsoft-COCO with English and Japanese captions.
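The abstract describes combining two objectives: a ranking loss that pulls images and their captions together in a joint embedding space, and an alignment term that keeps the word embeddings of the two languages compatible. The PyTorch sketch below illustrates one plausible form of that combination; it is not the authors' implementation, and the function names, the margin, and the weight `lam` are assumptions for illustration.

```python
# Illustrative sketch (not the paper's code) of a combined objective:
# a bidirectional max-margin image-caption ranking loss plus a
# cross-lingual word-embedding alignment term. Margin and lam are assumed.
import torch
import torch.nn.functional as F

def ranking_loss(img, cap, margin=0.2):
    """Bidirectional hinge (triplet) loss over a batch.

    img, cap: (batch, dim) L2-normalized embeddings; row i of each is a
    matching image-caption pair.
    """
    scores = img @ cap.t()                                # cosine similarities
    pos = scores.diag().view(-1, 1)                       # matched-pair scores
    cost_cap = (margin + scores - pos).clamp(min=0)       # image -> caption
    cost_img = (margin + scores - pos.t()).clamp(min=0)   # caption -> image
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_cap = cost_cap.masked_fill(mask, 0)              # ignore positives
    cost_img = cost_img.masked_fill(mask, 0)
    return cost_cap.sum() + cost_img.sum()

def alignment_loss(src_words, tgt_words):
    """Pull embeddings of translated word pairs together (assumed L2 form)."""
    return F.mse_loss(src_words, tgt_words)

def total_loss(img, cap_en, cap_de, src_words, tgt_words, lam=1.0):
    """Combined objective: both languages' captions are matched against the
    image while the two word-embedding spaces are kept aligned."""
    return (ranking_loss(img, cap_en)
            + ranking_loss(img, cap_de)
            + lam * alignment_loss(src_words, tgt_words))
```

In a training loop, `img`, `cap_en`, and `cap_de` would come from the image and caption encoders, while `src_words` and `tgt_words` would be embeddings of a bilingual word-pair dictionary; the weight `lam` trades retrieval quality against cross-lingual alignment.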

Authors (3)
  1. Alireza Mohammadshahi (13 papers)
  2. Remi Lebret (23 papers)
  3. Karl Aberer (44 papers)
Citations (10)