Accurate Word Representations with Universal Visual Guidance (2012.15086v1)

Published 30 Dec 2020 in cs.CL, cs.AI, and cs.CV

Abstract: Word representation is a fundamental component in neural language understanding models. Recently, pre-trained language models (PrLMs) have offered a performant way to produce contextualized word representations by leveraging sequence-level context. Although PrLMs generally yield more accurate contextualized word representations than non-contextualized models, they remain confined to textual context and lack the diverse cues that multimodal signals can provide. This paper therefore proposes a visual representation method that explicitly enhances conventional word embeddings with multiple-aspect senses from visual guidance. In detail, we build a small-scale word-image dictionary from a multimodal seed dataset, in which each word corresponds to diverse related images. The texts and paired images are encoded in parallel, followed by an attention layer that integrates the multimodal representations. We show that the method substantially improves disambiguation accuracy. Experiments on 12 natural language understanding and machine translation tasks further verify the effectiveness and generalization capability of the proposed approach.
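The abstract only sketches the fusion step: each word's text representation attends over the embeddings of its dictionary-retrieved images, and the result is combined with the original embedding. Below is a minimal, hedged PyTorch sketch of that idea; the module and parameter names (VisualGuidedEmbedding, hidden_dim, the number of images per word, the concatenation-based output) are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of attention-based text-image fusion, assuming precomputed
# text embeddings (e.g. PrLM outputs) and k candidate image embeddings per word
# retrieved from a word-image dictionary. Not the authors' released code.
import torch
import torch.nn as nn


class VisualGuidedEmbedding(nn.Module):
    def __init__(self, text_dim: int, image_dim: int, hidden_dim: int):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, text_emb: torch.Tensor, image_embs: torch.Tensor) -> torch.Tensor:
        # text_emb:   (batch, seq_len, text_dim)      contextual word representations
        # image_embs: (batch, seq_len, k, image_dim)  k images per word from the dictionary
        b, s, k, _ = image_embs.shape
        text_h = self.text_proj(text_emb)                         # (b, s, hidden)
        q = text_h.reshape(b * s, 1, -1)                          # one query per word
        kv = self.image_proj(image_embs).reshape(b * s, k, -1)    # its k image vectors
        visual, _ = self.attn(q, kv, kv)                          # attend over the images
        visual = visual.reshape(b, s, -1)
        fused = torch.cat([text_h, visual], dim=-1)               # combine text and visual cues
        return self.out(fused)                                    # enhanced word embedding


if __name__ == "__main__":
    layer = VisualGuidedEmbedding(text_dim=768, image_dim=512, hidden_dim=256)
    words = torch.randn(2, 10, 768)       # e.g. PrLM outputs for a 10-token sentence
    images = torch.randn(2, 10, 5, 512)   # 5 candidate images per word
    print(layer(words, images).shape)     # torch.Size([2, 10, 256])
```

The key design point the abstract implies is that attention lets each word weight its candidate images differently, so ambiguous words can lean on the images that match the current sentence sense.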

Authors (5)
  1. Zhuosheng Zhang (125 papers)
  2. Haojie Yu (4 papers)
  3. Hai Zhao (227 papers)
  4. Rui Wang (996 papers)
  5. Masao Utiyama (39 papers)