A Mixture Model for Learning Multi-Sense Word Embeddings (1706.05111v1)
Published 15 Jun 2017 in cs.CL
Abstract: Word embeddings are now a standard technique for inducing meaning representations for words. To obtain good representations, it is important to account for the different senses a word can have. In this paper, we propose a mixture model for learning multi-sense word embeddings. Our model generalizes previous work by allowing different weights to be induced for the different senses of a word. Experimental results show that our model outperforms previous models on standard evaluation tasks.
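The abstract describes a mixture over word senses with learned per-sense weights. Below is a minimal, hypothetical sketch of that idea (not the paper's exact formulation or parameterization): each word gets K sense vectors plus sense-weight logits, and the probability of a context word is a weighted mixture over the per-sense predictions. All sizes and names here are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of a sense-mixture objective (illustrative only):
# p(c | w) = sum_k pi_{w,k} * p(c | w, k), where pi_{w,k} are learned
# per-word sense weights and p(c | w, k) is a softmax over the vocabulary
# scored with the k-th sense vector of w.

V, K, D = 1000, 3, 50          # assumed vocabulary size, senses per word, embedding dim
rng = np.random.default_rng(0)

sense_vecs = rng.normal(scale=0.1, size=(V, K, D))   # per-sense word embeddings
ctx_vecs   = rng.normal(scale=0.1, size=(V, D))      # context (output) embeddings
sense_logits = np.zeros((V, K))                      # per-word sense-weight logits

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mixture_log_prob(w, c):
    """log p(c | w) under the sense mixture for word index w and context index c."""
    pi = softmax(sense_logits[w])                    # (K,) sense weights for word w
    scores = sense_vecs[w] @ ctx_vecs.T              # (K, V) scores for each sense
    probs = softmax(scores, axis=-1)[:, c]           # (K,) p(c | w, k) for each sense
    return np.log(np.dot(pi, probs) + 1e-12)

print(mixture_log_prob(w=5, c=42))
```

In a training loop, the negative of this log-likelihood (typically with negative sampling rather than a full softmax) would be minimized with respect to the sense vectors, context vectors, and sense-weight logits; the learned weights are what distinguish this mixture view from models that treat all senses of a word uniformly.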