
A Visual Embedding for the Unsupervised Extraction of Abstract Semantics (1507.08818v6)

Published 31 Jul 2015 in cs.CV, cs.LG, and cs.NE

Abstract: Vector-space word representations obtained from neural network models have been shown to enable semantic operations based on vector arithmetic. In this paper, we explore the existence of similar information in vector representations of images. For that purpose we define a methodology to obtain large, sparse vector representations of image classes, and generate vectors through the state-of-the-art deep learning architecture GoogLeNet for 20K images obtained from ImageNet. We first evaluate the resultant vector-space semantics through its correlation with WordNet distances, and find vector distances to be strongly correlated with linguistic semantics. We then explore the location of images within the vector space, finding elements close in WordNet to be clustered together, regardless of significant visual variance (e.g. 118 dog types). More surprisingly, we find that the space separates complex classes without supervision or prior knowledge (e.g. living things). Afterwards, we consider vector arithmetic. Although we are unable to obtain meaningful results in this regard, we discuss the various problems we encountered and how we intend to solve them. Finally, we discuss the impact of our research for cognitive systems, focusing on the role of the architecture being used.
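The paper's first evaluation step, correlating pairwise distances between image-class embedding vectors with WordNet-derived semantic distances, can be sketched as follows. This is a minimal illustration with toy data, not the paper's actual pipeline: the embeddings, the semantic distance matrix, and the use of cosine distance and Pearson correlation here are all illustrative assumptions.

```python
import numpy as np

# Toy stand-ins: 5 "class embeddings" (mimicking large, sparse vectors
# extracted from a network such as GoogLeNet) and a symmetric matrix of
# semantic distances (mimicking WordNet-based distances between classes).
rng = np.random.default_rng(0)

E = rng.random((5, 8))
E[E < 0.5] = 0.0  # sparsify, mimicking sparse class representations

S = rng.random((5, 5))
S = (S + S.T) / 2.0        # make symmetric
np.fill_diagonal(S, 0.0)   # zero self-distance

def cosine_dist(a, b):
    """Cosine distance between two vectors (1 - cosine similarity)."""
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Collect distances for every unordered pair of classes.
vec_d, sem_d = [], []
n = len(E)
for i in range(n):
    for j in range(i + 1, n):
        vec_d.append(cosine_dist(E[i], E[j]))
        sem_d.append(S[i, j])

# Correlation between embedding-space and semantic distances; a strong
# positive value would indicate the embedding captures linguistic semantics.
r = np.corrcoef(vec_d, sem_d)[0, 1]
print(f"pairs: {len(vec_d)}, correlation: {r:.3f}")
```

With 20K real images one would replace `E` with extracted network activations and `S` with actual WordNet distances; the correlation computation itself is unchanged.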

Authors (7)
  1. D. Garcia-Gasulla (1 paper)
  2. J. Béjar (1 paper)
  3. U. Cortés (1 paper)
  4. E. Ayguadé (2 papers)
  5. J. Labarta (1 paper)
  6. T. Suzumura (1 paper)
  7. R. Chen (103 papers)
Citations (17)
