Towards Non-saturating Recurrent Units for Modelling Long-term Dependencies (1902.06704v1)

Published 22 Jan 2019 in cs.NE, cs.LG, and stat.ML

Abstract: Modelling long-term dependencies is a challenge for recurrent neural networks. This is primarily due to the fact that gradients vanish during training, as the sequence length increases. Gradients can be attenuated by transition operators and are attenuated or dropped by activation functions. Canonical architectures like LSTM alleviate this issue by skipping information through a memory mechanism. We propose a new recurrent architecture (Non-saturating Recurrent Unit; NRU) that relies on a memory mechanism but forgoes both saturating activation functions and saturating gates, in order to further alleviate vanishing gradients. In a series of synthetic and real world tasks, we demonstrate that the proposed model is the only model that performs among the top 2 models across all tasks with and without long-term dependencies, when compared against a range of other architectures.
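
The abstract contrasts saturating gates (sigmoid/tanh, as in LSTM) with non-saturating updates around an explicit memory. As a rough illustration of that distinction only, the sketch below implements a toy recurrent cell with ReLU-based, additive memory writes and erases; the class name, sizes, and update rule are assumptions for demonstration and are not the NRU equations from the paper.

```python
# Illustrative sketch only: a recurrent cell that avoids saturating
# activations (tanh/sigmoid) by using ReLU-based, additive memory updates.
# This is NOT the paper's NRU; the update rule here is a simplified assumption.
import torch
import torch.nn as nn


class ToyNonSaturatingCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, memory_size: int):
        super().__init__()
        # Hidden state depends on the input, previous hidden state, and memory.
        self.hidden = nn.Linear(input_size + hidden_size + memory_size, hidden_size)
        # Write/erase vectors pass through ReLU, so they are never squashed into [0, 1].
        self.write = nn.Linear(hidden_size, memory_size)
        self.erase = nn.Linear(hidden_size, memory_size)

    def forward(self, x, h_prev, m_prev):
        h = torch.relu(self.hidden(torch.cat([x, h_prev, m_prev], dim=-1)))
        # Additive memory update with no saturating nonlinearity on the memory itself.
        m = m_prev + torch.relu(self.write(h)) - torch.relu(self.erase(h))
        return h, m


if __name__ == "__main__":
    cell = ToyNonSaturatingCell(input_size=8, hidden_size=16, memory_size=32)
    h, m = torch.zeros(16), torch.zeros(32)
    for x in torch.randn(10, 8):  # a length-10 toy sequence
        h, m = cell(x, h, m)
    print(h.shape, m.shape)
```

Because the memory update is additive and the gates are unbounded, gradients flowing through the memory path are not pushed through a squashing nonlinearity at every step, which is the property the abstract argues helps with long-term dependencies.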

Authors (5)
  1. Sarath Chandar (93 papers)
  2. Chinnadhurai Sankar (23 papers)
  3. Eugene Vorontsov (19 papers)
  4. Samira Ebrahimi Kahou (50 papers)
  5. Yoshua Bengio (601 papers)
Citations (55)
