
Unseen Word Representation by Aligning Heterogeneous Lexical Semantic Spaces (1811.04983v1)

Published 12 Nov 2018 in cs.CL, cs.AI, and cs.LG

Abstract: Word embedding techniques heavily rely on the abundance of training data for individual words. Given the Zipfian distribution of words in natural language texts, a large number of words do not usually appear frequently or at all in the training data. In this paper we put forward a technique that exploits the knowledge encoded in lexical resources, such as WordNet, to induce embeddings for unseen words. Our approach adapts graph embedding and cross-lingual vector space transformation techniques in order to merge lexical knowledge encoded in ontologies with that derived from corpus statistics. We show that the approach can provide consistent performance improvements across multiple evaluation benchmarks: in-vitro, on multiple rare word similarity datasets, and in-vivo, in two downstream text classification tasks.
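The abstract describes merging graph-derived lexical knowledge with corpus-based embeddings via a cross-space transformation. The snippet below is a minimal, illustrative sketch of that general idea only, not the authors' exact method: it assumes we already have graph embeddings (e.g. from a WordNet graph embedding) and corpus embeddings (e.g. word2vec) for a set of shared anchor words, learns an orthogonal Procrustes map between the two spaces, and projects unseen words' graph embeddings into the corpus space. All vocabularies and vectors here are synthetic placeholders.

```python
# Sketch: align a graph-based lexical space with a corpus-based space and
# induce corpus-space vectors for unseen words. Synthetic data throughout;
# the paper's actual training and alignment details may differ.
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Hypothetical vocabularies: anchor words present in both spaces, and
# words that only have a graph embedding (unseen in the corpus).
anchors = ["dog", "cat", "house", "run", "blue"]
unseen = ["axolotl", "quern"]

# Placeholder embeddings; in practice these would come from a graph
# embedding of WordNet and from embeddings trained on a corpus.
graph_emb = {w: rng.normal(size=dim) for w in anchors + unseen}
corpus_emb = {w: rng.normal(size=dim) for w in anchors}

# Solve min_W ||X W - Y||_F with W orthogonal (orthogonal Procrustes),
# where X holds graph-space anchors and Y the corpus-space anchors.
X = np.stack([graph_emb[w] for w in anchors])
Y = np.stack([corpus_emb[w] for w in anchors])
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# Project unseen words' graph embeddings through the learned map to
# obtain induced corpus-space vectors.
induced = {w: graph_emb[w] @ W for w in unseen}
print({w: v[:3] for w, v in induced.items()})
```

An orthogonal map is a common choice for such alignments because it preserves distances and angles within the source space; the paper's actual transformation between the heterogeneous spaces may be configured differently.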

Authors (5)
  1. Victor Prokhorov (9 papers)
  2. Mohammad Taher Pilehvar (43 papers)
  3. Dimitri Kartsaklis (24 papers)
  4. Nigel Collier (83 papers)
  5. Pietro Lio (69 papers)
Citations (12)
