Explainable Semantic Space by Grounding Language to Vision with Cross-Modal Contrastive Learning (2111.07180v1)

Published 13 Nov 2021 in cs.CL and cs.LG

Abstract: In natural language processing, most models try to learn semantic representations merely from texts. The learned representations encode the distributional semantics but fail to connect to any knowledge about the physical world. In contrast, humans learn language by grounding concepts in perception and action and the brain encodes grounded semantics for cognition. Inspired by this notion and recent work in vision-language learning, we design a two-stream model for grounding language learning in vision. The model includes a VGG-based visual stream and a BERT-based language stream. The two streams merge into a joint representational space. Through cross-modal contrastive learning, the model first learns to align visual and language representations with the MS COCO dataset. The model further learns to retrieve visual objects with language queries through a cross-modal attention module and to infer the visual relations between the retrieved objects through a bilinear operator with the Visual Genome dataset. After training, the language stream of this model is a stand-alone language model capable of embedding concepts in a visually grounded semantic space. This semantic space manifests principal dimensions explainable with human intuition and neurobiological knowledge. Word embeddings in this semantic space are predictive of human-defined norms of semantic features and are segregated into perceptually distinctive clusters. Furthermore, the visually grounded language model also enables compositional language understanding based on visual knowledge and multimodal image search with queries based on images, texts, or their combinations.
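The cross-modal contrastive alignment described in the abstract can be illustrated with a minimal NumPy sketch of a symmetric InfoNCE-style objective. This is an assumption-laden illustration, not the paper's implementation: the function name `contrastive_alignment_loss`, the temperature value, and the use of raw NumPy arrays (standing in for VGG image features and BERT caption features from matched MS COCO pairs) are all hypothetical choices for exposition.

```python
import numpy as np

def contrastive_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss aligning image and text embeddings.

    img_emb, txt_emb: (N, D) arrays; row i of each is assumed to come
    from the same image-caption pair (as in MS COCO training batches).
    """
    # L2-normalize so the dot product becomes cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)

    logits = img @ txt.T / temperature   # (N, N): pairwise similarities
    labels = np.arange(len(logits))      # matching pairs lie on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_prob[np.arange(len(y)), y].mean()

    # Symmetric loss: image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits, labels)
                  + cross_entropy(logits.T, labels))
```

Minimizing this loss pulls each image embedding toward its paired caption embedding and pushes it away from the other captions in the batch, which is the mechanism that merges the two streams into a joint representational space.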

Authors (4)
  1. Yizhen Zhang (11 papers)
  2. Minkyu Choi (16 papers)
  3. Kuan Han (6 papers)
  4. Zhongming Liu (9 papers)
Citations (13)
