
Multi-level Multimodal Common Semantic Space for Image-Phrase Grounding (1811.11683v2)

Published 28 Nov 2018 in cs.CV, cs.CL, cs.LG, and eess.IV

Abstract: We address the problem of phrase grounding by learning a multi-level common semantic space shared by the textual and visual modalities. We exploit multiple levels of feature maps of a Deep Convolutional Neural Network, as well as contextualized word and sentence embeddings extracted from a character-based language model. Following dedicated non-linear mappings for visual features at each level, word, and sentence embeddings, we obtain multiple instantiations of our common semantic space in which comparisons between any target text and the visual content are performed with cosine similarity. We guide the model by a multi-level multimodal attention mechanism which outputs attended visual features at each level. The best level is chosen to be compared with text content for maximizing the pertinence scores of image-sentence pairs of the ground truth. Experiments conducted on three publicly available datasets show significant performance gains (20%-60% relative) over the state-of-the-art in phrase localization and set a new performance record on those datasets. We provide a detailed ablation study to show the contribution of each element of our approach and release our code on GitHub.

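The abstract outlines the core computation: visual feature maps from several CNN levels and contextualized word/sentence embeddings are projected into a shared space, where cosine similarity drives spatial attention and yields an attended visual feature per level that is scored against the text. Below is a minimal PyTorch sketch of one such level. The module name, dimensions, and two-layer mappings are illustrative assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LevelAttention(nn.Module):
    """Sketch of one level of a multimodal common semantic space:
    map visual and text features into a shared space, attend over
    spatial locations via cosine similarity, and score the attended
    visual feature against the text embedding.
    Dimensions and layer choices are assumptions, not the paper's exact setup."""

    def __init__(self, vis_dim=2048, txt_dim=1024, common_dim=512):
        super().__init__()
        # Dedicated non-linear mappings into the common space (assumed 2-layer MLPs)
        self.vis_map = nn.Sequential(nn.Linear(vis_dim, common_dim), nn.ReLU(),
                                     nn.Linear(common_dim, common_dim))
        self.txt_map = nn.Sequential(nn.Linear(txt_dim, common_dim), nn.ReLU(),
                                     nn.Linear(common_dim, common_dim))

    def forward(self, vis_feats, txt_emb):
        # vis_feats: (B, N, vis_dim) spatial features from one CNN level (N = H*W)
        # txt_emb:   (B, txt_dim)    word or sentence embedding
        v = F.normalize(self.vis_map(vis_feats), dim=-1)      # (B, N, D)
        t = F.normalize(self.txt_map(txt_emb), dim=-1)        # (B, D)
        sim = torch.einsum('bnd,bd->bn', v, t)                # cosine similarity per location
        attn = F.softmax(sim, dim=-1)                         # attention over spatial locations
        attended = torch.einsum('bn,bnd->bd', attn, v)        # attended visual feature
        score = F.cosine_similarity(attended, t, dim=-1)      # pertinence score at this level
        return score, attn
```

In a multi-level setup along these lines, one such module would run per CNN level, and, as the abstract describes, the level giving the highest pertinence score would be selected when comparing an image with its ground-truth sentence.
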
Authors (6)
  1. Hassan Akbari (8 papers)
  2. Svebor Karaman (17 papers)
  3. Surabhi Bhargava (3 papers)
  4. Brian Chen (21 papers)
  5. Carl Vondrick (93 papers)
  6. Shih-Fu Chang (131 papers)
Citations (75)