Learning to Compute Word Embeddings On the Fly (1706.00286v3)

Published 1 Jun 2017 in cs.LG and cs.CL

Abstract: Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the "long tail" of this distribution requires enormous amounts of data. Representations of rare words trained directly on end tasks are usually poor, requiring us to pre-train embeddings on external data or to treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained end-to-end for the downstream task. We show that this improves results against baselines where embeddings are trained on the end task, for reading comprehension, recognizing textual entailment, and language modelling.
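
The abstract states the core idea at a high level: rather than relying on poorly trained embeddings for rare words, an auxiliary network reads a small amount of external data about the word (e.g., a dictionary definition) and predicts its embedding, with the whole pipeline trained end-to-end on the downstream task. The sketch below is a minimal illustration of that idea, not the authors' code; it assumes PyTorch, a mean-pooling definition reader, and a simple hypothetical rule (word id above a threshold) for deciding which words count as rare.

```python
# Minimal sketch (assumption-laden, not the paper's implementation):
# predict an embedding for a rare word from auxiliary text such as its
# dictionary definition, so the predictor can be trained end-to-end
# with the downstream model.

import torch
import torch.nn as nn


class OnTheFlyEmbedding(nn.Module):
    def __init__(self, vocab_size: int, dim: int, rare_threshold: int):
        super().__init__()
        # Regular embeddings, reliably trained only for frequent words.
        self.word_emb = nn.Embedding(vocab_size, dim)
        # Embeddings for the words that appear inside definitions.
        self.def_word_emb = nn.Embedding(vocab_size, dim)
        self.rare_threshold = rare_threshold  # assumption: ids >= threshold are "rare"

    def forward(self, word_ids, definition_ids, definition_mask):
        """
        word_ids:        (batch,)          ids of the words to embed
        definition_ids:  (batch, def_len)  ids of each word's definition tokens
        definition_mask: (batch, def_len)  1 for real tokens, 0 for padding
        """
        direct = self.word_emb(word_ids)                      # (batch, dim)

        # Mean-pool the definition tokens to predict an embedding on the fly.
        def_vecs = self.def_word_emb(definition_ids)          # (batch, L, dim)
        mask = definition_mask.unsqueeze(-1).float()
        pooled = (def_vecs * mask).sum(1) / mask.sum(1).clamp(min=1.0)

        # Use the predicted embedding for rare words, the trained one otherwise.
        is_rare = (word_ids >= self.rare_threshold).unsqueeze(-1).float()
        return is_rare * pooled + (1.0 - is_rare) * direct
```

In the paper's setting, richer definition readers (e.g., a recurrent encoder) and other ways of combining the directly trained and predicted embeddings are possible; the point carried by the abstract is that the predictor receives gradients from the end task rather than being fixed in advance.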

Authors (6)
  1. Dzmitry Bahdanau (46 papers)
  2. Tom Bosc (4 papers)
  3. Stanisław Jastrzębski (31 papers)
  4. Edward Grefenstette (66 papers)
  5. Pascal Vincent (78 papers)
  6. Yoshua Bengio (601 papers)
Citations (82)
