
Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings (1906.03608v1)

Published 9 Jun 2019 in cs.CL and cs.LG

Abstract: Word embeddings typically represent different meanings of a word in a single conflated vector. Empirical analysis of embeddings of ambiguous words is currently limited by the small size of manually annotated resources and by the fact that word senses are treated as unrelated individual concepts. We present a large dataset based on manual Wikipedia annotations and word senses, where word senses from different words are related by semantic classes. This is the basis for novel diagnostic tests of an embedding's content: we probe word embeddings for semantic classes and analyze the embedding space by classifying embeddings into semantic classes. Our main findings are: (i) Information about a sense is generally represented well in a single-vector embedding - if the sense is frequent. (ii) A classifier can accurately predict whether a word is single-sense or multi-sense, based only on its embedding. (iii) Although rare senses are not well represented in single-vector embeddings, this does not have a negative impact on an NLP application whose performance depends on frequent senses.
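The probing approach in the abstract can be sketched as a simple diagnostic classifier: train a linear model on word embeddings to predict a semantic-class label, and read held-out accuracy as a measure of how well the class is encoded in the vectors. The sketch below uses synthetic random vectors as stand-ins for real embeddings and an arbitrary two-class setup; the data, dimensions, and class separation are illustrative assumptions, not the paper's actual dataset or probe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for word embeddings: two semantic classes whose
# vectors differ by a small per-dimension mean shift, mimicking
# class-specific signal encoded in the embedding space.
dim, n_per_class = 50, 200
class_a = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, dim))
class_b = rng.normal(loc=0.5, scale=1.0, size=(n_per_class, dim))
X = np.vstack([class_a, class_b])
y = np.array([0] * n_per_class + [1] * n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# A linear probe: if it classifies held-out embeddings accurately,
# the semantic class is linearly recoverable from the vectors.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
accuracy = probe.score(X_te, y_te)
print(f"probe accuracy: {accuracy:.2f}")
```

In the paper's setting, the same idea is applied with real embeddings and Wikipedia-derived semantic-class labels; a probe that succeeds only for frequent senses is what underlies finding (i).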

Authors (5)
  1. Yadollah Yaghoobzadeh (34 papers)
  2. Katharina Kann (50 papers)
  3. Timothy J. Hazen (6 papers)
  4. Eneko Agirre (53 papers)
  5. Hinrich Schütze (250 papers)
Citations (36)