One Representation per Word - Does it make Sense for Composition? (1702.06696v1)

Published 22 Feb 2017 in cs.CL

Abstract: In this paper, we investigate whether an a priori disambiguation of word senses is strictly necessary or whether the meaning of a word in context can be disambiguated through composition alone. We evaluate the performance of off-the-shelf single-vector and multi-sense vector models on a benchmark phrase similarity task and a novel task for word-sense discrimination. We find that single-sense vector models perform as well or better than multi-sense vector models despite arguably less clean elementary representations. Our findings furthermore show that simple composition functions such as pointwise addition are able to recover sense specific information from a single-sense vector model remarkably well.
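The composition function the abstract highlights, pointwise addition, can be illustrated with a minimal sketch. The toy vectors and vocabulary below are invented for illustration (in the paper, vectors would come from an off-the-shelf single-sense model), but the mechanics are the same: a phrase vector is the elementwise sum of its word vectors, and context words shift the ambiguous word toward the relevant sense.

```python
import numpy as np

# Hypothetical toy vectors; real experiments would load an off-the-shelf
# single-sense embedding model. Values here are made up for illustration.
vectors = {
    "interest": np.array([0.9, 0.1, 0.3]),
    "rate":     np.array([0.8, 0.2, 0.1]),
    "hobby":    np.array([0.1, 0.9, 0.4]),
    "loan":     np.array([0.7, 0.3, 0.2]),
}

def compose(words):
    """Pointwise addition: the phrase vector is the sum of its word vectors."""
    return np.sum([vectors[w] for w in words], axis=0)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Adding "rate" to the ambiguous "interest" pulls the phrase toward the
# financial sense: it ends up closer to "loan" than to "hobby".
phrase = compose(["interest", "rate"])
print(cosine(phrase, vectors["loan"]) > cosine(phrase, vectors["hobby"]))  # True
```

The point of the sketch is the paper's core question: even without a priori sense disambiguation, simple additive composition over single-sense vectors can recover sense-specific information from context.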

Authors (5)
  1. Thomas Kober (12 papers)
  2. Julie Weeds (11 papers)
  3. John Wilkie (1 paper)
  4. Jeremy Reffin (5 papers)
  5. David Weir (15 papers)
Citations (7)
