Refined Gate: A Simple and Effective Gating Mechanism for Recurrent Units (2002.11338v2)

Published 26 Feb 2020 in cs.CV, cs.LG, and cs.NE

Abstract: Recurrent neural networks (RNNs) have been widely studied for sequence learning tasks, and the mainstream models (e.g., LSTM and GRU) rely on a gating mechanism that controls how information flows between hidden states. However, the vanilla gates in RNNs (e.g., the input gate in LSTM) suffer from gate undertraining, which can be caused by various factors, such as saturating activation functions, the gate layout (e.g., the number of gates and the gating functions), or even a suboptimal memory state. These factors may prevent the gates from learning their switching roles and thus weaken performance. In this paper, we propose a new gating mechanism for general gated recurrent neural networks to handle this issue. Specifically, the proposed gates directly short-connect the extracted input features to the outputs of the vanilla gates; we denote these refined gates. The refining mechanism enhances gradient back-propagation and extends the gating activation scope, which can guide the RNN toward possibly deeper minima. We verify the proposed gating mechanism on three popular types of gated RNNs: LSTM, GRU, and MGU. Extensive experiments on three synthetic tasks, three language modeling tasks, and five scene text recognition benchmarks demonstrate the effectiveness of our method.
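To make the short-connection idea concrete, below is a minimal PyTorch sketch of an LSTM cell whose gates are "refined" by the projected input features. It is based only on the abstract: the exact combination rule (here, a tanh-bounded shortcut added to each sigmoid gate and clamped back to [0, 1]) and the names RefinedGateLSTMCell, ih, and hh are illustrative assumptions, not the paper's published equations.

```python
import torch
import torch.nn as nn


class RefinedGateLSTMCell(nn.Module):
    """LSTM cell with 'refined' gates: projected input features are
    short-connected to the outputs of the vanilla sigmoid gates.

    Illustrative sketch based on the abstract only; the precise
    refinement formula is an assumption, not the paper's equation.
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # Fused projections for the four gates (input, forget, cell, output).
        self.ih = nn.Linear(input_size, 4 * hidden_size)
        self.hh = nn.Linear(hidden_size, 4 * hidden_size)
        self.hidden_size = hidden_size

    def forward(self, x, state):
        h, c = state
        gx = self.ih(x)                      # extracted input features
        gates = gx + self.hh(h)
        i, f, g, o = gates.chunk(4, dim=-1)
        ix, fx, _, ox = gx.chunk(4, dim=-1)  # shortcut terms from the input

        # Vanilla sigmoid gates plus the assumed refinement: a bounded
        # shortcut from the input features, so gradients have a path
        # around the saturating sigmoid and gate values can reach the
        # full [0, 1] range ("extending the gating activation scope").
        i = torch.clamp(torch.sigmoid(i) + torch.tanh(ix), 0.0, 1.0)
        f = torch.clamp(torch.sigmoid(f) + torch.tanh(fx), 0.0, 1.0)
        o = torch.clamp(torch.sigmoid(o) + torch.tanh(ox), 0.0, 1.0)

        c = f * c + i * torch.tanh(g)
        h = o * torch.tanh(c)
        return h, (h, c)


# Example: one step on a batch of 8 inputs of size 32, hidden size 64.
cell = RefinedGateLSTMCell(32, 64)
x = torch.randn(8, 32)
h0 = torch.zeros(8, 64)
c0 = torch.zeros(8, 64)
y, (h1, c1) = cell(x, (h0, c0))
```

The same shortcut pattern would apply to GRU and MGU gates; only the number of gates and the state update change.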

Authors (7)
  1. Zhanzhan Cheng (28 papers)
  2. Yunlu Xu (18 papers)
  3. Mingjian Cheng (4 papers)
  4. Yu Qiao (563 papers)
  5. Shiliang Pu (106 papers)
  6. Yi Niu (38 papers)
  7. Fei Wu (317 papers)
Citations (6)
