
Internal representation dynamics and geometry in recurrent neural networks (2001.03255v2)

Published 9 Jan 2020 in cs.LG, cs.NE, and math.DS

Abstract: The efficiency of recurrent neural networks (RNNs) in dealing with sequential data has long been established. However, unlike deep and convolutional networks, where the recognition of a certain feature can be attributed to individual layers, it is unclear what "sub-task" a single recurrent step or layer accomplishes. Our work seeks to shed light on how a vanilla RNN implements a simple classification task by analysing the dynamics of the network and the geometric properties of its hidden states. We find that early internal representations are evocative of the true labels of the data, but this information is not directly accessible to the output layer. Furthermore, the network's dynamics and the sequence length are both critical to correct classification, even when no additional task-relevant information is provided.
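The kind of setup the abstract describes — a vanilla RNN whose hidden state is recorded at every timestep so its geometry and the readout's view of it can be analysed — can be sketched as follows. This is a minimal illustrative sketch with random weights and made-up dimensions, not the paper's actual model or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative assumptions, not from the paper)
d_in, d_hid, d_out, T = 3, 16, 2, 10

# Vanilla RNN parameters; random here, trained in practice
W_in = rng.normal(scale=0.5, size=(d_hid, d_in))
W_rec = rng.normal(scale=0.5, size=(d_hid, d_hid))
b = np.zeros(d_hid)
W_out = rng.normal(scale=0.5, size=(d_out, d_hid))

def run_rnn(x_seq):
    """Run a vanilla RNN h_t = tanh(W_in x_t + W_rec h_{t-1} + b),
    recording every intermediate hidden state for later analysis."""
    h = np.zeros(d_hid)
    states = []
    for x_t in x_seq:
        h = np.tanh(W_in @ x_t + W_rec @ h + b)
        states.append(h.copy())
    return np.stack(states)  # shape (T, d_hid)

x_seq = rng.normal(size=(T, d_in))
states = run_rnn(x_seq)

# Applying the output layer at every step shows what the readout "sees"
# early vs. late in the sequence, in the spirit of the paper's probing
# of whether label information in early states reaches the output.
logits_per_step = states @ W_out.T  # shape (T, d_out)
```

With the full trajectory of hidden states in hand, one can study their geometry (e.g. via PCA or pairwise distances) and compare early-step representations against the labels, which is the style of analysis the abstract refers to.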

Authors (3)
  1. Stefan Horoi (8 papers)
  2. Guillaume Lajoie (58 papers)
  3. Guy Wolf (79 papers)
Citations (5)
