
Motion Feature Network: Fixed Motion Filter for Action Recognition (1807.10037v2)

Published 26 Jul 2018 in cs.CV

Abstract: Spatio-temporal representations in frame sequences play an important role in the task of action recognition. Previously, methods that use optical flow as temporal information in combination with a set of RGB images containing spatial information have shown great performance enhancement in action recognition tasks. However, they incur an expensive computational cost and require a two-stream (RGB and optical flow) framework. In this paper, we propose MFNet (Motion Feature Network), containing motion blocks that make it possible to encode spatio-temporal information between adjacent frames in a unified network that can be trained end-to-end. The motion block can be attached to any existing CNN-based action recognition framework with only a small additional cost. We evaluated our network on two action recognition datasets (Jester and Something-Something) and achieved competitive performance on both by training the networks from scratch.
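The abstract describes motion blocks that encode spatio-temporal information between adjacent frames without computing optical flow. As a rough illustration only, and not the paper's actual architecture, the following NumPy sketch approximates the idea of fixed directional motion filters: for each of a hypothetical set of spatial shifts, the features of the next frame are displaced and subtracted from the current frame's features, yielding directional motion responses. The function name, shift set, and tensor layout are all assumptions for illustration.

```python
import numpy as np

def motion_features(feat_t, feat_t1,
                    shifts=((0, 0), (0, 1), (1, 0), (0, -1), (-1, 0))):
    """Illustrative sketch (not the paper's exact motion block):
    for each fixed spatial shift, displace the next frame's feature
    map and subtract it from the current frame's, producing a stack
    of directional motion responses between adjacent frames.

    feat_t, feat_t1: arrays of shape (C, H, W) for frames t and t+1.
    Returns an array of shape (num_shifts, C, H, W).
    """
    responses = []
    for dy, dx in shifts:
        # Circular shift stands in for a padded spatial shift here.
        shifted = np.roll(feat_t1, shift=(dy, dx), axis=(1, 2))
        responses.append(feat_t - shifted)
    return np.stack(responses, axis=0)
```

In a real network, such differences would be computed on intermediate CNN feature maps and fed back into the backbone, letting a single RGB stream learn motion cues end-to-end instead of relying on precomputed optical flow.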

Authors (5)
  1. Myunggi Lee (5 papers)
  2. Seungeui Lee (2 papers)
  3. Nojun Kwak (116 papers)
  4. GyuTae Park (5 papers)
  5. SungJoon Son (2 papers)
Citations (119)
