Multiplex Word Embeddings for Selectional Preference Acquisition (2001.02836v1)

Published 9 Jan 2020 in cs.CL

Abstract: Conventional word embeddings represent each word with a fixed vector, usually trained on co-occurrence patterns among words. The power of such representations is limited, however, because the same word may function differently under different syntactic relations. One way to address this limitation is to incorporate the relational dependencies among words into their embeddings. In this paper, we therefore propose a multiplex word embedding model that can be easily extended to various relations among words. Each word has a center embedding that represents its overall semantics, together with several relational embeddings that represent its relational dependencies. Compared to existing models, our model effectively distinguishes words with respect to different relations without introducing unnecessary sparseness. Moreover, to accommodate many relations, we use a small dimension for the relational embeddings while keeping them effective. Experiments on selectional preference acquisition and word similarity demonstrate the effectiveness of the proposed model, and a further scalability study shows that our embeddings need only 1/20 of the original embedding size to achieve better performance.
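
The core idea in the abstract, a shared center embedding per word plus small relation-specific embeddings, can be sketched as a simple embedding module. The sketch below is illustrative only: the relation inventory, the dimensions (300 for the center vector, 15 per relational vector, roughly the 1/20 ratio mentioned above), and the choice to concatenate center and relational vectors are assumptions for the example, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class MultiplexWordEmbedding(nn.Module):
    """Sketch of a multiplex embedding table: one center embedding per word
    plus a small relational embedding per (word, relation) pair."""

    def __init__(self, vocab_size, num_relations, center_dim=300, rel_dim=15):
        super().__init__()
        # Center embeddings capture a word's overall semantics.
        self.center = nn.Embedding(vocab_size, center_dim)
        # One small embedding table per syntactic relation (e.g. nsubj, dobj).
        self.relational = nn.ModuleList(
            [nn.Embedding(vocab_size, rel_dim) for _ in range(num_relations)]
        )

    def forward(self, word_ids, relation_id):
        """Return a relation-specific view of the given words.

        Concatenating center and relational vectors is an illustrative choice;
        the paper may combine the two parts differently.
        """
        center_vec = self.center(word_ids)                 # (batch, center_dim)
        rel_vec = self.relational[relation_id](word_ids)   # (batch, rel_dim)
        return torch.cat([center_vec, rel_vec], dim=-1)


# Usage example with hypothetical word indices and relation 0.
emb = MultiplexWordEmbedding(vocab_size=10000, num_relations=4)
vectors = emb(torch.tensor([42, 7]), relation_id=0)
print(vectors.shape)  # torch.Size([2, 315])
```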

Authors (8)
  1. Hongming Zhang (111 papers)
  2. Jiaxin Bai (30 papers)
  3. Yan Song (91 papers)
  4. Kun Xu (277 papers)
  5. Changlong Yu (22 papers)
  6. Yangqiu Song (196 papers)
  7. Wilfred Ng (10 papers)
  8. Dong Yu (329 papers)
Citations (17)
