
Probing Contextualized Sentence Representations with Visual Awareness (1911.02971v1)

Published 7 Nov 2019 in cs.CL, cs.CV, and cs.LG

Abstract: We present a universal framework for modeling contextualized sentence representations with visual awareness, motivated by the shortcomings of manually annotated multimodal parallel data. For each sentence, we first retrieve a diverse set of images from a shared cross-modal embedding space that is pre-trained on a large-scale collection of text-image pairs. The texts and images are then encoded by a Transformer encoder and a convolutional neural network, respectively. The two sequences of representations are fused by a simple and effective attention layer. The architecture can be easily applied to text-only natural language processing tasks without manually annotating multimodal parallel corpora. We apply the proposed method to three tasks, including neural machine translation, natural language inference, and sequence labeling, and experimental results verify its effectiveness.
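The abstract does not spell out the fusion layer, but the description (text tokens from a Transformer, image features from a CNN, fused by attention) suggests scaled dot-product attention in which text tokens act as queries over the retrieved image features. The following is a minimal numpy sketch under that assumption; the shapes, the residual connection, and the shared projection dimension `d` are illustrative choices, not details taken from the paper.

```python
import numpy as np

# Hypothetical shapes: a sentence of 5 tokens with d=8 dimensional
# Transformer outputs, and 3 retrieved images whose CNN features have
# been projected into the same d-dimensional space.
rng = np.random.default_rng(0)
d = 8
text_repr = rng.standard_normal((5, d))   # Transformer-encoded tokens
img_repr = rng.standard_normal((3, d))    # CNN-encoded retrieved images

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(text, imgs):
    """Fuse image features into each token via scaled dot-product
    attention: text tokens are queries; image features serve as both
    keys and values. A residual connection keeps the text signal."""
    scores = text @ imgs.T / np.sqrt(text.shape[-1])  # (tokens, images)
    weights = softmax(scores, axis=-1)                # attend over images
    visual_context = weights @ imgs                   # (tokens, d)
    return text + visual_context                      # residual fusion

fused = attention_fuse(text_repr, img_repr)
print(fused.shape)  # (5, 8)
```

Because the fused output has the same shape as the text representations, a downstream text-only task model can consume it unchanged, which matches the paper's claim that the framework plugs into NLP tasks without multimodal annotation.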

Authors (6)
  1. Zhuosheng Zhang (125 papers)
  2. Rui Wang (996 papers)
  3. Kehai Chen (59 papers)
  4. Masao Utiyama (39 papers)
  5. Eiichiro Sumita (31 papers)
  6. Hai Zhao (227 papers)
Citations (1)

Summary

We haven't generated a summary for this paper yet.