FRAGE: Frequency-Agnostic Word Representation (1809.06858v2)

Published 18 Sep 2018 in cs.CL, cs.LG, and stat.ML

Abstract: Continuous word representation (aka word embedding) is a basic building block in many neural network-based models used in natural language processing tasks. Although it is widely accepted that words with similar semantics should be close to each other in the embedding space, we find that word embeddings learned in several tasks are biased towards word frequency: the embeddings of high-frequency and low-frequency words lie in different subregions of the embedding space, and the embedding of a rare word and a popular word can be far from each other even if they are semantically similar. This makes learned word embeddings ineffective, especially for rare words, and consequently limits the performance of these neural network models. In this paper, we develop a neat, simple yet effective way to learn \emph{FRequency-AGnostic word Embedding} (FRAGE) using adversarial training. We conducted comprehensive studies on ten datasets across four natural language processing tasks, including word similarity, language modeling, machine translation and text classification. Results show that with FRAGE, we achieve higher performance than the baselines in all tasks.
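The adversarial idea in the abstract can be illustrated with a minimal sketch: a discriminator tries to predict from an embedding whether its word is frequent or rare, while the embeddings are updated to fool that discriminator, pushing frequency information out of the representation. The sketch below is a toy numpy illustration under assumed data (two Gaussian clusters standing in for "frequent" and "rare" subregions) and a plain logistic-regression discriminator; the paper itself applies this adversarial term alongside each task's own training loss, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 word vectors in 16-d, where "frequent" words
# start in a different subregion of the space than "rare" words.
d = 16
freq = rng.normal(loc=+1.0, scale=0.5, size=(100, d))
rare = rng.normal(loc=-1.0, scale=0.5, size=(100, d))
E = np.vstack([freq, rare])            # embedding matrix (trainable)
y = np.array([1] * 100 + [0] * 100)    # 1 = frequent word, 0 = rare word

# Gap between class means before adversarial training (for comparison).
gap0 = np.linalg.norm(E[:100].mean(axis=0) - E[100:].mean(axis=0))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(d), 0.0                # logistic-regression discriminator

for step in range(1000):
    # 1) Discriminator step: gradient descent on cross-entropy, so the
    #    discriminator learns to tell frequent from rare embeddings.
    p = sigmoid(E @ w + b)
    w -= 0.5 * (E.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)
    # 2) Adversarial embedding step: move each embedding to *increase*
    #    the discriminator's loss on its own example, removing the
    #    frequency signal from the representation.
    p = sigmoid(E @ w + b)
    E += 0.1 * np.outer(p - y, w)

# After training, the two subregions have been pulled together.
gap1 = np.linalg.norm(E[:100].mean(axis=0) - E[100:].mean(axis=0))
```

In the real setting the embedding update would be combined with the downstream task loss (e.g. translation or language modeling), so semantics are preserved while only the frequency signal is suppressed.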

Authors (6)
  1. Chengyue Gong (30 papers)
  2. Di He (108 papers)
  3. Xu Tan (164 papers)
  4. Tao Qin (201 papers)
  5. Liwei Wang (239 papers)
  6. Tie-Yan Liu (242 papers)
Citations (142)