Scaling Recurrent Neural Network Language Models (1502.00512v1)

Published 2 Feb 2015 in cs.CL and cs.LG

Abstract: This paper investigates the scaling properties of Recurrent Neural Network Language Models (RNNLMs). We discuss how to train very large RNNs on GPUs and address the questions of how RNNLMs scale with respect to model size, training-set size, computational costs and memory. Our analysis shows that despite being more costly to train, RNNLMs obtain much lower perplexities on standard benchmarks than n-gram models. We train the largest known RNNs and present relative word error rate gains of 18% on an ASR task. We also present the new lowest perplexities on the recently released billion-word language modelling benchmark, a 1 BLEU point gain on machine translation and a 17% relative hit rate gain in word prediction.
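
As context for the abstract's headline metric, the sketch below shows what a minimal Elman-style RNN language model computes and how sequence perplexity is derived from its next-word probabilities. It is an illustrative NumPy sketch only: the layer sizes, parameter names, and random initialisation are assumptions for demonstration, not the architecture, hyperparameters, or GPU training setup used in the paper.

```python
import numpy as np

# Illustrative sizes only; the paper trains far larger models on GPUs.
vocab_size, embed_dim, hidden_dim = 1000, 64, 128
rng = np.random.default_rng(0)

E = rng.normal(scale=0.1, size=(vocab_size, embed_dim))    # word embeddings
W_xh = rng.normal(scale=0.1, size=(embed_dim, hidden_dim)) # input-to-hidden
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))# recurrent weights
W_hy = rng.normal(scale=0.1, size=(hidden_dim, vocab_size))# hidden-to-output

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def perplexity(token_ids):
    """Perplexity of a token sequence under the (untrained) RNNLM."""
    h = np.zeros(hidden_dim)
    log_prob = 0.0
    for prev, nxt in zip(token_ids[:-1], token_ids[1:]):
        h = np.tanh(E[prev] @ W_xh + h @ W_hh)  # recurrent state update
        p = softmax(h @ W_hy)                   # next-word distribution
        log_prob += np.log(p[nxt])              # log-likelihood of actual next word
    n = len(token_ids) - 1
    return np.exp(-log_prob / n)                # perplexity = exp of mean negative log-likelihood

tokens = rng.integers(0, vocab_size, size=50)
print(f"perplexity: {perplexity(tokens):.1f}")  # near vocab_size for random weights
```

With random weights the model is close to uniform over the vocabulary, so the reported perplexity is near 1000; training drives it down, which is the quantity the paper compares against n-gram baselines.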

Authors (5)
  1. Will Williams (4 papers)
  2. Niranjani Prasad (5 papers)
  3. David Mrva (1 paper)
  4. Tom Ash (4 papers)
  5. Tony Robinson (3 papers)
Citations (71)
