LightRNN: Memory and Computation-Efficient Recurrent Neural Networks (1610.09893v1)
Abstract: Recurrent neural networks (RNNs) have achieved state-of-the-art performance in many natural language processing tasks, such as language modeling and machine translation. However, when the vocabulary is large, the RNN model becomes very big (e.g., possibly beyond the memory capacity of a GPU device) and its training becomes very inefficient. In this work, we propose a novel technique to tackle this challenge. The key idea is to use a 2-Component (2C) shared embedding for word representations. We allocate every word in the vocabulary into a table, each row of which is associated with a vector, and each column with another vector. Depending on its position in the table, a word is jointly represented by two components: a row vector and a column vector. Since the words in the same row share the row vector and the words in the same column share the column vector, we only need $2 \sqrt{|V|}$ vectors to represent a vocabulary of $|V|$ unique words, which is far fewer than the $|V|$ vectors required by existing approaches. Based on the 2-Component shared embedding, we design a new RNN algorithm and evaluate it using the language modeling task on several benchmark datasets. The results show that our algorithm significantly reduces the model size and speeds up the training process, without sacrificing accuracy (it achieves similar, if not better, perplexity compared to state-of-the-art language models). Remarkably, on the One-Billion-Word benchmark dataset, our algorithm achieves comparable perplexity to previous language models, whilst reducing the model size by a factor of 40-100 and speeding up the training process by a factor of 2. We name our proposed algorithm \emph{LightRNN} to reflect its very small model size and very high training speed.
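To make the 2-Component shared embedding concrete, below is a minimal Python/NumPy sketch of the table lookup. It uses a naive row-major word-to-cell allocation and illustrative names (`row_embed`, `col_embed`); the paper itself learns the word allocation during training via a bootstrap procedure, which is not shown here.

```python
import numpy as np

# Sketch of the 2-Component (2C) shared embedding lookup.
# Words are placed in an n x n table with n = ceil(sqrt(|V|)),
# so only 2n vectors are stored instead of |V|.

V = 10000                            # vocabulary size (illustrative)
d = 256                              # embedding dimension (illustrative)
n = int(np.ceil(np.sqrt(V)))         # table side length, n * n >= V

rng = np.random.default_rng(0)
row_embed = rng.normal(size=(n, d))  # one shared vector per table row
col_embed = rng.normal(size=(n, d))  # one shared vector per table column

def word_components(word_id: int):
    """Return the (row vector, column vector) pair representing a word."""
    r, c = divmod(word_id, n)        # naive row-major placement in the table
    return row_embed[r], col_embed[c]

x_row, x_col = word_components(1234)
print(x_row.shape, x_col.shape)      # (256,) (256,)

# Parameter comparison: 2*sqrt(|V|) vectors versus |V| vectors.
print("standard embedding vectors:", V)        # 10000
print("2C shared embedding vectors:", 2 * n)   # 200
```

In LightRNN the two components of each word are fed to the recurrent network in sequence (row part, then column part), so both the input embedding table and the output softmax layer shrink from $|V|$ rows to $2\sqrt{|V|}$ rows.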
- Xiang Li
- Tao Qin
- Jian Yang
- Tie-Yan Liu