Incorporating Visual Semantics into Sentence Representations within a Grounded Space (2002.02734v1)

Published 7 Feb 2020 in cs.CL

Abstract: Language grounding is an active field aiming at enriching textual representations with visual information. Generally, textual and visual elements are embedded in the same representation space, which implicitly assumes a one-to-one correspondence between modalities. This hypothesis does not hold when representing words, and becomes problematic when used to learn sentence representations (the focus of this paper), as a visual scene can be described by a wide variety of sentences. To overcome this limitation, we propose to transfer visual information to textual representations by learning an intermediate representation space: the grounded space. We further propose two new complementary objectives ensuring that (1) sentences associated with the same visual content are close in the grounded space and (2) similarities between related elements are preserved across modalities. We show that this model outperforms the previous state-of-the-art on classification and semantic relatedness tasks.
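The two objectives described in the abstract can be illustrated with a toy NumPy sketch. This is not the paper's implementation: the projection matrix, embedding sizes, and the assumption that the first two sentences share an image are all hypothetical, and the losses are written as plain squared-error terms for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical toy data: 4 sentence embeddings and their paired
# visual (image) embeddings, in different spaces.
S = rng.normal(size=(4, 16))   # sentence representations
V = rng.normal(size=(4, 8))    # associated visual representations
W = rng.normal(size=(16, 8))   # learned projection into the grounded space

G = S @ W  # grounded-space representations of the sentences

# Objective (1): sentences describing the same visual content should be
# close in the grounded space. Here sentences 0 and 1 are assumed
# (for illustration) to describe the same scene.
cluster_loss = float(np.linalg.norm(G[0] - G[1]) ** 2)

# Objective (2): similarity structure should be preserved across
# modalities: grounded-space similarities should match visual ones.
preservation_loss = sum(
    (cosine(G[i], G[j]) - cosine(V[i], V[j])) ** 2
    for i in range(4) for j in range(i + 1, 4)
)

total_loss = cluster_loss + preservation_loss
```

In training, `W` would be optimized by gradient descent so that both terms shrink; the sketch only evaluates the losses once for fixed random values.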

Authors (5)
  1. Patrick Bordes (5 papers)
  2. Eloi Zablocki (36 papers)
  3. Laure Soulier (39 papers)
  4. Benjamin Piwowarski (38 papers)
  5. Patrick Gallinari (73 papers)
Citations (25)
