Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds (1312.5559v3)
Published 19 Dec 2013 in cs.CL
Abstract: There are two main approaches to the distributed representation of words: low-dimensional deep learning embeddings and high-dimensional distributional models, in which each dimension corresponds to a context word. In this paper, we combine these two approaches by learning embeddings from distributional-model vectors rather than from the one-hot vectors standardly used in deep learning. We show that the combined approach performs better on a word relatedness judgment task.
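To illustrate the contrast the abstract draws, the sketch below builds high-dimensional distributional vectors from co-occurrence counts and then derives a low-dimensional embedding from those vectors instead of from one-hot inputs. This is only a minimal illustration: the toy corpus, window size, and the use of truncated SVD as the embedding-learning step are assumptions for demonstration, not the model or data described in the paper.

```python
# Minimal sketch (not the paper's implementation): learn low-dimensional
# embeddings from high-dimensional distributional vectors rather than
# from one-hot vectors. Truncated SVD stands in for the learned projection.
import numpy as np

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat and a dog played",
]
window = 2  # symmetric context window size (illustrative choice)

tokens = [tok for sent in corpus for tok in sent.split()]
vocab = sorted(set(tokens))
idx = {w: i for i, w in enumerate(vocab)}

# High-dimensional distributional model: one dimension per context word.
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    words = sent.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                counts[idx[w], idx[words[j]]] += 1

# Low-dimensional embeddings learned from the distributional vectors
# (truncated SVD as a simple stand-in for a trained embedding model).
dim = 5
U, S, _ = np.linalg.svd(counts, full_matrices=False)
embeddings = U[:, :dim] * S[:dim]

# Cosine similarity as a proxy for word relatedness judgments.
def relatedness(w1, w2):
    a, b = embeddings[idx[w1]], embeddings[idx[w2]]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print(relatedness("cat", "dog"))
```

The key point of the combined approach is visible in the shape of the input: each word is represented by a full row of context-word counts, so the embedding step starts from distributional information rather than from an uninformative one-hot indicator.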