Joint Word Representation Learning using a Corpus and a Semantic Lexicon (1511.06438v1)

Published 19 Nov 2015 in cs.CL and cs.AI

Abstract: Methods for learning word representations using large text corpora have received much attention lately due to their impressive performance in numerous NLP tasks such as semantic similarity measurement and word analogy detection. Despite their success, these data-driven word representation learning methods do not consider the rich semantic relational structure between words in a co-occurring context. On the other hand, much manual effort has already gone into the construction of semantic lexicons such as WordNet, which represent the meanings of words by defining the various relationships that exist among the words in a language. We consider the question: can we improve the word representations learnt from a corpus by integrating the knowledge from semantic lexicons? For this purpose, we propose a joint word representation learning method that simultaneously predicts the co-occurrences of two words in a sentence subject to the relational constraints given by the semantic lexicon. We use relations that exist between words in the lexicon to regularize the word representations learnt from the corpus. Our proposed method outperforms previously proposed methods for incorporating semantic lexicons into word representations, with statistically significant improvements on several benchmark datasets for semantic similarity and word analogy.

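The joint objective described in the abstract combines a corpus-based co-occurrence prediction term with a lexicon-based regularizer. Below is a minimal sketch of that idea, assuming a GloVe-style weighted least-squares co-occurrence loss and a simple squared-distance penalty that pulls lexicon-related words together; the toy vocabulary, the `lexicon_pairs` list, and the regularization weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: corpus co-occurrence loss (GloVe-style) plus a
# lexicon regularizer that pulls related words' vectors together.
# The exact loss, weighting scheme, and update rules in the paper may
# differ; the vocabulary, counts, and hyperparameters here are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["cat", "dog", "pet", "car"]
V, d = len(vocab), 50
idx = {w: i for i, w in enumerate(vocab)}

# Toy co-occurrence counts X[i, j]; in practice these come from a corpus.
X = np.array([[0, 20, 15, 1],
              [20, 0, 18, 1],
              [15, 18, 0, 1],
              [1,  1,  1, 0]], dtype=float)

# Word pairs related in a semantic lexicon (e.g. WordNet synonyms/hypernyms).
lexicon_pairs = [("cat", "pet"), ("dog", "pet")]

W = 0.01 * rng.standard_normal((V, d))  # target word vectors
C = 0.01 * rng.standard_normal((V, d))  # context word vectors
b_w, b_c = np.zeros(V), np.zeros(V)     # bias terms

def weight(x, x_max=100.0, alpha=0.75):
    """GloVe-style weighting of co-occurrence counts."""
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

lam, lr = 0.1, 0.05  # regularization strength and learning rate (assumed)

for epoch in range(200):
    # Corpus term: weighted least squares on log co-occurrence counts.
    for i in range(V):
        for j in range(V):
            if X[i, j] <= 0:
                continue
            err = W[i] @ C[j] + b_w[i] + b_c[j] - np.log(X[i, j])
            g = weight(X[i, j]) * err
            grad_w, grad_c = g * C[j], g * W[i]
            W[i] -= lr * grad_w
            C[j] -= lr * grad_c
            b_w[i] -= lr * g
            b_c[j] -= lr * g
    # Lexicon term: squared-distance penalty between related word vectors.
    for u, v in lexicon_pairs:
        i, j = idx[u], idx[v]
        diff = W[i] - W[j]
        W[i] -= lr * lam * diff
        W[j] += lr * lam * diff

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Lexicon-related words should end up closer than unrelated ones.
print(cos(W[idx["cat"]], W[idx["pet"]]), cos(W[idx["cat"]], W[idx["car"]]))
```

In the paper the lexicon relations are integrated into the joint objective rather than applied as a separate update step, but the sketch conveys the shared intuition: the corpus determines the overall embedding geometry while lexicon relations act as soft constraints on related word pairs.
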
Authors (4)
  1. Danushka Bollegala (84 papers)
  2. Alsuhaibani Mohammed (1 paper)
  3. Takanori Maehara (44 papers)
  4. Ken-ichi Kawarabayashi (72 papers)
Citations (62)