
Neural Random Projections for Language Modelling (1807.00930v4)

Published 2 Jul 2018 in cs.CL and cs.NE

Abstract: Neural network-based language models deal with data sparsity problems by mapping the large discrete space of words into a smaller continuous space of real-valued vectors. By learning distributed vector representations for words, each training sample informs the neural network model about a combinatorial number of other patterns. In this paper, we exploit the sparsity in natural language even further by encoding each unique input word using a fixed sparse random representation. These sparse codes are then projected onto a smaller embedding space, which allows for the encoding of word occurrences from a possibly unknown vocabulary, along with the creation of more compact language models using a reduced number of parameters. We investigate the properties of our encoding mechanism empirically, by evaluating its performance on the widely used Penn Treebank corpus. We show that guaranteeing approximately equidistant (nearly orthogonal) vector representations for unique discrete inputs is sufficient to provide the neural network model with enough information to learn, and make use of, distributed representations for these inputs.
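
The mechanism in the abstract lends itself to a compact illustration: each word gets a fixed sparse ternary code (high-dimensional sparse random vectors are nearly orthogonal, hence approximately equidistant), and a projection matrix maps those codes into a smaller dense embedding space. The sketch below is illustrative, not the authors' reference implementation; the dimensions (s = 1000, k = 10 nonzero entries, d = 128), the hash-based seeding of per-word codes, and the use of a fixed random projection in place of the trainable one are all assumptions for the demo.

```python
# Minimal sketch of sparse random word codes + projection to embeddings.
# Assumed parameters (not from the abstract): s=1000, k=10, d=128.
import numpy as np

rng = np.random.default_rng(0)

def sparse_random_code(word, s=1000, k=10):
    """Fixed sparse ternary code for a word: k/2 entries set to +1 and
    k/2 to -1 at pseudo-random positions. Seeding from a hash of the
    word means even out-of-vocabulary words get a deterministic code."""
    word_rng = np.random.default_rng(abs(hash(word)) % (2**32))
    idx = word_rng.choice(s, size=k, replace=False)
    code = np.zeros(s, dtype=np.float32)
    code[idx[: k // 2]] = 1.0
    code[idx[k // 2 :]] = -1.0
    return code

# Projection from the sparse code space to the embedding space.
# In the model this matrix would be a trainable parameter; here it is
# a fixed random matrix purely for illustration.
s, d = 1000, 128
projection = rng.standard_normal((s, d)).astype(np.float32) / np.sqrt(s)

def embed(word):
    return sparse_random_code(word) @ projection  # shape (d,)

# Distinct words' sparse codes are nearly orthogonal, so their projected
# embeddings start out approximately equidistant before any training.
a, b = embed("cat"), embed("dog")
cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print("cosine(cat, dog) ~", round(cos, 3))
```

Note that the model never stores one row per vocabulary word: the s-by-d projection replaces a V-by-d embedding table, which is where the reduction in parameters comes from.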

Authors (2)
  1. Davide Nunes
  2. Luis Antunes
