Compressing Neural Language Models by Sparse Word Representations (1610.03950v1)
Abstract: Neural networks are among the state-of-the-art techniques for language modeling. Existing neural language models typically map discrete words to distributed, dense vector representations. After the preceding context words are processed by hidden layers, an output layer estimates the probability of the next word. Such approaches are time- and memory-intensive because of the large number of parameters in the word embeddings and the output layer. In this paper, we propose to compress neural language models by sparse word representations. In our experiments, the number of parameters in the model grows almost imperceptibly with vocabulary size. Moreover, our approach not only reduces the parameter space to a large extent, but also improves performance in terms of perplexity.
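To make the compression idea concrete, here is a minimal NumPy sketch of representing a rare word as a sparse linear combination of a small set of dense "base" (common-word) embeddings, which is the kind of scheme the abstract alludes to. All sizes (`vocab_size`, `num_base`, `dim`, `nonzeros`) are illustrative assumptions, not values from the paper, and this is not the authors' training code.

```python
import numpy as np

# Sketch (assumption-laden): keep dense embeddings only for the most frequent
# "base" words, and compose each rare word's embedding from a sparse set of them.

rng = np.random.default_rng(0)

vocab_size = 50_000   # total vocabulary size |V| (illustrative)
num_base = 5_000      # number of common words kept dense (assumption)
dim = 200             # embedding dimension (assumption)
nonzeros = 20         # nonzero coefficients per rare word (assumption)

# Dense embedding matrix for base words only.
base_embeddings = rng.normal(size=(num_base, dim))

def rare_word_embedding(sparse_idx, sparse_val):
    """Compose a rare word's embedding from a few base-word embeddings."""
    return sparse_val @ base_embeddings[sparse_idx]

# Example: one rare word coded by `nonzeros` base words and coefficients.
idx = rng.choice(num_base, size=nonzeros, replace=False)
val = rng.normal(size=nonzeros)
vec = rare_word_embedding(idx, val)
print(vec.shape)  # (dim,)

# Parameter counts: dense baseline vs. sparse-composition scheme.
dense_params = vocab_size * dim
sparse_params = num_base * dim + (vocab_size - num_base) * nonzeros
print(dense_params, sparse_params)  # the sparse scheme grows slowly with |V|
```

The final comparison shows why the parameter count is nearly insensitive to vocabulary size: each additional rare word costs only `nonzeros` coefficients rather than a full `dim`-dimensional vector.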
- Yunchuan Chen (6 papers)
- Lili Mou (79 papers)
- Yan Xu (258 papers)
- Ge Li (213 papers)
- Zhi Jin (160 papers)