
Linear Algebraic Structure of Word Senses, with Applications to Polysemy (1601.03764v6)

Published 14 Jan 2016 in cs.CL, cs.LG, and stat.ML

Abstract: Word embeddings are ubiquitous in NLP and information retrieval, but it is unclear what they represent when the word is polysemous. Here it is shown that multiple word senses reside in linear superposition within the word embedding and simple sparse coding can recover vectors that approximately capture the senses. The success of our approach, which applies to several embedding methods, is mathematically explained using a variant of the random walk on discourses model (Arora et al., 2016). A novel aspect of our technique is that each extracted word sense is accompanied by one of about 2000 "discourse atoms" that gives a succinct description of which other words co-occur with that word sense. Discourse atoms can be of independent interest, and make the method potentially more useful. Empirical tests are used to verify and support the theory.

Authors (5)
  1. Sanjeev Arora (93 papers)
  2. Yuanzhi Li (119 papers)
  3. Yingyu Liang (107 papers)
  4. Tengyu Ma (117 papers)
  5. Andrej Risteski (58 papers)
Citations (237)

Summary

Linear Algebraic Structure of Word Senses, with Applications to Polysemy

The paper "Linear Algebraic Structure of Word Senses, with Applications to Polysemy" introduces a novel approach to understanding how word senses are embedded within conventional word embeddings, such as word2vec and GloVe. By examining the linear algebraic properties of these embeddings, the authors propose a method where multiple senses of a polysemous word can coexist within the same vector, with these senses being discernible through linear superposition.

Theoretical Insights

The central theoretical contribution of the paper is the Linearity Assertion. This assertion posits that the embedding of a polysemous word can be expressed as a linear combination of vectors corresponding to its senses, with coefficients that reflect the relative frequencies of those senses in the corpus. The insight derives from a generative language model in which discourse is a slowly evolving unit vector, and the probability of emitting a word depends on the inner product between the current discourse vector and the word's vector.
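Stated compactly, and using illustrative notation rather than the paper's exact symbols (the sense vectors and coefficients below are latent quantities, not directly observed):

```latex
% Linearity Assertion (sketch): the observed embedding v_w of a polysemous
% word w is approximately a weighted sum of vectors for its k senses,
% with weights that track how often each sense appears in the corpus.
v_{w} \;\approx\; \sum_{i=1}^{k} \alpha_i \, v_{s_i},
\qquad \alpha_i \ \text{increasing with the corpus frequency of sense } s_i .
```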

The authors leverage a "random walk on discourses" model, a log-linear topic-modeling framework adapted to slowly changing contexts, to explain how such linearity emerges from nonlinear embedding techniques. In this model, discourses are continuous topics: each corpus window is generated with word probabilities governed by the inner products between word vectors and the current discourse vector. To justify the Linearity Assertion, the authors analyze a Gaussian variant of this random walk and show that a word's expected context vector can be linearly transformed to approximate its embedding. That a single linear transformation works across the whole vocabulary reveals a linear structure hidden inside highly nonlinear methods such as word2vec and GloVe.
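As a rough numerical illustration of this claim (not the paper's derivation or code), one could fit a single linear map from each word's averaged context vector to its embedding across the vocabulary and check the relative residual. Here `emb` and `contexts` are hypothetical inputs obtained from a pre-trained model and its training corpus.

```python
import numpy as np

# Assumed (hypothetical) inputs:
#   emb[w]      : d-dimensional embedding of word w (e.g. word2vec or GloVe)
#   contexts[w] : list of d-dimensional vectors for words co-occurring with w
# The claim is that one shared matrix A maps the average context vector of
# every word close to that word's embedding.

def fit_linear_map(emb, contexts):
    words = [w for w in emb if contexts.get(w)]
    X = np.stack([np.mean(contexts[w], axis=0) for w in words])  # avg context vectors
    Y = np.stack([emb[w] for w in words])                        # word embeddings
    # Least-squares fit of Y ~ X @ A, with a single A for the whole vocabulary
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = np.linalg.norm(X @ A - Y) / np.linalg.norm(Y)
    return A, resid  # a small relative residual supports the linear structure
```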

Empirical Validation

The theoretical claims are subjected to empirical validation. One key experiment creates artificial polysemous words (pseudowords) by merging all occurrences of two unrelated words in the corpus into a single token and retraining the embeddings. The resulting pseudoword vector turns out to be close to a linear combination of the two original word vectors, supporting the Linearity Assertion.
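A minimal sketch of this style of check, assuming a tokenized corpus and an external embedding-training step (the function names and the least-squares reconstruction recipe are illustrative, not the paper's exact procedure):

```python
import numpy as np

def merge_into_pseudoword(corpus_tokens, w1, w2, pseudo="w1_w2"):
    """Replace every occurrence of w1 and w2 with a single artificial token."""
    return [pseudo if t in (w1, w2) else t for t in corpus_tokens]

def linear_reconstruction_error(v_pseudo, v1, v2):
    """Best least-squares fit v_pseudo ~ a*v1 + b*v2; returns coefficients and
    the relative error of the fit."""
    B = np.stack([v1, v2], axis=1)                      # d x 2 basis
    coef, *_ = np.linalg.lstsq(B, v_pseudo, rcond=None)
    resid = np.linalg.norm(B @ coef - v_pseudo) / np.linalg.norm(v_pseudo)
    return coef, resid  # small resid => pseudoword lies near the span of v1, v2
```

In practice one would retrain the embeddings on the modified corpus and pass the pseudoword's new vector, together with the original vectors of the two merged words, to the reconstruction check.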

Furthermore, the authors present tests involving hand-labeled word senses, leveraging WordNet definitions to construct sense vectors. The results show that these constructed sense vectors lie close to the polysemous word's vector in the embedding space, reinforcing the plausibility of the linear combination model.
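One hedged way to reproduce this kind of test with off-the-shelf resources is to build a crude sense vector from the lemmas and gloss of a WordNet synset and compare it to the polysemous word's embedding. `emb` is a placeholder for a pre-trained embedding lookup, and the averaging recipe is an assumption rather than the paper's exact construction.

```python
import numpy as np
from nltk.corpus import wordnet as wn

def sense_vector(word, synset, emb):
    """Crude sense vector: average embeddings of words tied to one synset."""
    related = set(synset.lemma_names())
    related.update(w for w in synset.definition().lower().split())
    vecs = [emb[w] for w in related if w in emb and w != word]
    return np.mean(vecs, axis=0) if vecs else None

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Example: compare each WordNet sense of "tie" to the single vector emb["tie"]
# for syn in wn.synsets("tie"):
#     sv = sense_vector("tie", syn, emb)
#     if sv is not None:
#         print(syn.name(), cosine(sv, emb["tie"]))
```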

Applications to Word Sense Induction (WSI)

Building on these theoretical foundations, the authors develop an unsupervised WSI method. The approach applies sparse coding to the word embeddings, expressing each word vector as a sparse combination of roughly 2000 "discourse atoms": unit vectors in the embedding space, each of which succinctly summarizes a set of co-occurring words. Each atom that participates in a word's sparse representation corresponds to one of that word's senses. Sparse coding thus serves as a linear-algebraic analog of classical clustering, extracting and representing word senses simultaneously across the whole vocabulary.
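The paper solves this sparse-coding problem with a k-SVD-style solver; the sketch below substitutes scikit-learn's dictionary learning as a stand-in. The atom count (~2000) and the per-word sparsity (about five nonzero coefficients) mirror the scale reported in the paper, but the solver choice and hyperparameters here are assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_discourse_atoms(embedding_matrix, n_atoms=2000, n_senses=5):
    """Sparse-code every word vector as a combination of a few shared atoms.

    embedding_matrix: (vocab_size, dim) array of pre-trained word vectors.
    Returns (atoms, codes): atoms has shape (n_atoms, dim); codes has shape
    (vocab_size, n_atoms), with at most n_senses nonzero entries per row.
    """
    dl = MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",            # orthogonal matching pursuit
        transform_n_nonzero_coefs=n_senses,   # hard sparsity per word
        random_state=0,
    )
    codes = dl.fit_transform(embedding_matrix)
    atoms = dl.components_                    # learned dictionary of atoms
    return atoms, codes
```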

In practical evaluations against benchmark datasets like SemEval 2010 and the newly proposed police lineup test, the WSI method demonstrates competitive performance. The police lineup test, in particular, offers a nuanced evaluation by requiring algorithms to identify actual senses of a polysemous word from a distractor set, a task in which the proposed method performs comparably to non-native speakers.
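The paper defines its own lineup-scoring rule; the snippet below is only a plausible reconstruction of the idea, reusing the atoms and sparse codes from the previous sketch. A candidate sense (a small set of related words) is judged genuine if it aligns well with one of the atoms the target word actually uses.

```python
import numpy as np

def score_candidates(word, candidate_senses, emb, atoms, codes, word_index):
    """Hypothetical lineup-style scoring of candidate senses for `word`."""
    idx = word_index[word]
    active = np.nonzero(codes[idx])[0]          # atoms used by this word
    scores = []
    for sense_words in candidate_senses:
        vecs = [emb[w] for w in sense_words if w in emb]
        if not vecs:
            scores.append(0.0)
            continue
        s = np.mean(vecs, axis=0)
        s = s / np.linalg.norm(s)
        # best alignment between the candidate and any of the word's atoms
        scores.append(max(
            (float(atoms[a] @ s / np.linalg.norm(atoms[a])) for a in active),
            default=0.0,
        ))
    return scores  # higher score => more likely a genuine sense of `word`
```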

Implications and Future Directions

The implications of this work extend both practically and theoretically. Practically, the ability to uncover multiple word senses in pre-trained embeddings, without additional training data or changes to the embeddings, is valuable for natural language processing tasks that rely on nuanced semantic understanding, such as word sense disambiguation and contextual language modeling. Theoretically, unveiling the linear algebraic structure of senses in embeddings encourages further exploration into the latent semantic structures captured by these representations.

For future developments in AI, this work paves the way for more sophisticated language understanding systems that can dynamically interpret polysemy based on context, potentially enhancing machine comprehension and generation capabilities. Additionally, the concept of discourse atoms could stimulate novel methodologies in topic modeling and thematic content analysis across diverse text corpora. This paper stands as a testament to the rich structure that seemingly opaque machine learning models can reveal upon closer scrutiny through the lens of algebraic and probabilistic reasoning.