Non-local Recurrent Neural Memory for Supervised Sequence Modeling (1908.09535v1)

Published 26 Aug 2019 in cs.CV

Abstract: Typical methods for supervised sequence modeling are built upon recurrent neural networks to capture temporal dependencies. One potential limitation of these methods is that they only explicitly model information interactions between adjacent time steps in a sequence, so high-order interactions between non-adjacent time steps are not fully exploited. This greatly limits their capability to model long-range temporal dependencies, since first-order interactions cannot be maintained over long spans due to information dilution and vanishing gradients. To tackle this limitation, we propose the Non-local Recurrent Neural Memory (NRNM) for supervised sequence modeling, which performs non-local operations to learn full-order interactions within a sliding temporal block and models global interactions between blocks in a gated recurrent manner. Consequently, our model is able to capture long-range dependencies. Moreover, the latent high-level features contained in high-order interactions can be distilled by our model. We demonstrate the merits of NRNM on two different tasks: action recognition and sentiment analysis.
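
The abstract describes two key ingredients: full-order (non-local) interactions learned within a sliding temporal block, and a gated recurrent update that carries information across blocks. The snippet below is a minimal PyTorch sketch of that idea only, assuming standard self-attention for the non-local step and a GRU-style gate for the cross-block memory; the class and parameter names (`NonLocalBlockMemory`, `block_size`) are hypothetical and this is not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): block-wise non-local memory.
import torch
import torch.nn as nn

class NonLocalBlockMemory(nn.Module):
    """Self-attention over a sliding block of hidden states, followed by a
    gated (GRU-style) update of a block-level memory vector."""

    def __init__(self, hidden_dim, block_size, num_heads=4):
        super().__init__()
        self.block_size = block_size
        # Non-local (all-pairs) interactions within a block via self-attention.
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        # Gated recurrent update of the memory across consecutive blocks.
        self.gate = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, hidden_dim), e.g. outputs of an LSTM.
        batch, seq_len, dim = hidden_states.shape
        memory = hidden_states.new_zeros(batch, dim)
        for start in range(0, seq_len, self.block_size):
            block = hidden_states[:, start:start + self.block_size]
            # Interactions among all time steps inside the block (non-local step).
            attended, _ = self.attn(block, block, block)
            # Summarize the block and update the memory with a gated step.
            summary = attended.mean(dim=1)
            memory = self.gate(summary, memory)
        return memory  # sequence-level representation spanning all blocks

# Example usage: x = torch.randn(2, 50, 128)
# NonLocalBlockMemory(128, block_size=10)(x).shape -> torch.Size([2, 128])
```

In the paper itself the non-local operations and the memory update are tied more closely to the underlying recurrent network; the sketch only conveys the block-then-gate structure suggested by the abstract.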

Authors (7)
  1. Canmiao Fu (6 papers)
  2. Wenjie Pei (56 papers)
  3. Qiong Cao (26 papers)
  4. Chaopeng Zhang (7 papers)
  5. Yong Zhao (194 papers)
  6. Xiaoyong Shen (27 papers)
  7. Yu-Wing Tai (123 papers)
Citations (11)
