
Fast Graph Convolutional Recurrent Neural Networks

Published 26 Jan 2020 in eess.SP (arXiv:2001.09407v1)

Abstract: This paper proposes a Fast Graph convolutional Recurrent Neural Network (FGRNN) architecture to predict sequences with an underlying graph structure. The proposed architecture addresses the limitations of the standard recurrent neural network (RNN), namely vanishing and exploding gradients, which cause numerical instabilities during training. State-of-the-art architectures that combine gated RNNs, such as the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), with graph convolutions are known to improve numerical stability during training, but at the expense of model size, since they involve a large number of trainable parameters. FGRNN addresses this problem by adding a weighted residual connection with only two extra trainable parameters compared to the standard RNN. Numerical experiments on a real 3D point cloud dataset corroborate the proposed architecture.
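The abstract describes the core idea as a standard RNN update, with graph-convolutional mixing of node features, stabilized by a weighted residual connection that adds only two trainable scalars. A minimal sketch of one such update step, assuming a single-hop graph shift `S` (e.g. an adjacency or Laplacian matrix) and a tanh nonlinearity (the paper's exact equations may differ):

```python
import numpy as np

def fgrnn_step(h_prev, x, S, W, U, b, alpha, beta):
    """One FGRNN-style update for N graph nodes (illustrative sketch).

    h_prev : (N, F_h)  hidden state per node
    x      : (N, F_in) input features per node
    S      : (N, N)    graph shift operator (adjacency/Laplacian)
    W, U, b            standard RNN weights and bias
    alpha, beta        the two extra trainable scalars of the
                       weighted residual connection
    """
    # Graph-convolution-like mixing: diffuse inputs and state over edges.
    h_tilde = np.tanh(S @ x @ W + S @ h_prev @ U + b)
    # Weighted residual connection: only two extra parameters versus a
    # vanilla RNN, intended to tame vanishing/exploding gradients.
    return alpha * h_tilde + beta * h_prev
```

With `alpha = 1` and `beta = 0` this reduces to a plain graph-convolutional RNN step; training small `beta > 0` lets the previous state pass through directly, which is the stabilizing residual path the abstract refers to.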
