A Practical Sparse Approximation for Real Time Recurrent Learning (2006.07232v1)

Published 12 Jun 2020 in cs.LG, cs.NE, and stat.ML

Abstract: Current methods for training recurrent neural networks are based on backpropagation through time, which requires storing a complete history of network states, and prohibits updating the weights "online" (after every timestep). Real Time Recurrent Learning (RTRL) eliminates the need for history storage and allows for online weight updates, but does so at the expense of computational costs that are quartic in the state size. This renders RTRL training intractable for all but the smallest networks, even ones that are made highly sparse. We introduce the Sparse n-step Approximation (SnAp) to the RTRL influence matrix, which only keeps entries that are nonzero within n steps of the recurrent core. SnAp with n=1 is no more expensive than backpropagation, and we find that it substantially outperforms other RTRL approximations with comparable costs such as Unbiased Online Recurrent Optimization. For highly sparse networks, SnAp with n=2 remains tractable and can outperform backpropagation through time in terms of learning speed when updates are done online. SnAp becomes equivalent to RTRL when n is large.
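
The central object in the abstract is the RTRL influence matrix J_t = dh_t/dθ, updated each step as J_t = D_t J_{t-1} + ∂h_t/∂θ with D_t = ∂h_t/∂h_{t-1}; SnAp-n retains only those entries of J_t that become nonzero within n steps of a parameter's use. The NumPy sketch below illustrates the n = 1 case for a vanilla tanh RNN, where the retained entries coincide with the pattern of the immediate Jacobian, so the per-step cost matches backpropagation. The function names, the diagonal-propagation form, and the toy usage loop are illustrative assumptions, not code from the paper.

```python
import numpy as np

# Hypothetical sketch of SnAp-1 for a vanilla RNN h_t = tanh(W h_{t-1} + U x_t);
# names here (rnn_step, snap1_update, ...) are illustrative, not the authors' code.

def rnn_step(W, U, h_prev, x):
    """One step of a vanilla tanh RNN."""
    return np.tanh(W @ h_prev + U @ x)

def snap1_update(J, W, h_prev, h):
    """SnAp-1 update of the influence approximation for the recurrent weights W.

    Full RTRL tracks dh_i/dW_jk for every (i, j, k), i.e. O(n^3) storage.
    SnAp-1 keeps only the entries that are nonzero after one step (i == j),
    stored as an (n, n) array with J[i, k] ~ dh_i / dW_ik.
    """
    d = 1.0 - h ** 2                 # tanh'(pre_t), shape (n,)
    D_diag = d * np.diag(W)          # diagonal of dh_t/dh_{t-1} retained by SnAp-1
    # Propagate the retained influence one step, then add the immediate Jacobian term.
    return D_diag[:, None] * J + d[:, None] * h_prev[None, :]

# Example usage (shapes only, random data):
n, m = 8, 4
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((n, n))
U = 0.1 * rng.standard_normal((n, m))
h, J = np.zeros(n), np.zeros((n, n))
for x in rng.standard_normal((5, m)):
    h_prev = h
    h = rnn_step(W, U, h_prev, x)
    J = snap1_update(J, W, h_prev, h)
dL_dh = h - 1.0                      # stand-in loss gradient w.r.t. h_t
grad_W = dL_dh[:, None] * J          # online estimate: dL/dW_ik ~ dL/dh_i * J[i, k]
```

The same construction with n = 2 would additionally retain entries reachable through one application of the recurrent Jacobian's sparsity pattern, which stays tractable only when the network itself is highly sparse, as the abstract notes.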

Authors (6)
  1. Jacob Menick (13 papers)
  2. Erich Elsen (28 papers)
  3. Utku Evci (25 papers)
  4. Simon Osindero (45 papers)
  5. Karen Simonyan (54 papers)
  6. Alex Graves (29 papers)
Citations (30)
