
Unifying Graph Embedding Features with Graph Convolutional Networks for Skeleton-based Action Recognition (2003.03007v2)

Published 6 Mar 2020 in cs.CV, cs.LG, and eess.IV

Abstract: Combining skeleton structure with graph convolutional networks has achieved remarkable performance in human action recognition. Since current research focuses on designing a basic graph for representing skeleton data, the resulting embedding features carry only basic topological information and cannot capture more systematic perspectives of the skeleton data. In this paper, we overcome this limitation by proposing a novel framework that unifies 15 graph embedding features within the graph convolutional network for human action recognition, aiming to best exploit graph information to distinguish key joints, bones, and body parts in human action, rather than being restricted to a single feature or domain. Additionally, we fully investigate how to find the best graph features of the skeleton structure for improving human action recognition. The topological information of the skeleton sequence is also explored to further enhance performance in a multi-stream framework. Moreover, the unified graph features are extracted by adaptive methods during training, which yields further improvements. Our model is validated on three large-scale datasets, namely NTU-RGB+D, Kinetics, and SYSU-3D, and outperforms the state-of-the-art methods. Overall, our work unifies graph embedding features to promote systematic research on human action recognition.
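The core operation the abstract builds on is a graph convolution over the skeleton's joint adjacency. The sketch below is an illustrative example only, not the authors' 15-feature framework: it assumes a fixed normalized adjacency over 25 joints (as in NTU-RGB+D) and 3-D coordinates per joint, and the class name `SkeletonGraphConv` and all shapes are hypothetical.

```python
# Minimal sketch of a spatial graph convolution over skeleton joints.
# Not the paper's released code; shapes and names are assumptions.
import torch
import torch.nn as nn

class SkeletonGraphConv(nn.Module):
    def __init__(self, in_channels, out_channels, adjacency):
        super().__init__()
        # Normalized adjacency with self-loops: A_hat = D^{-1/2} (A + I) D^{-1/2}
        a = adjacency + torch.eye(adjacency.size(0))
        d_inv_sqrt = a.sum(dim=1).pow(-0.5)
        self.register_buffer("a_hat", d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :])
        self.linear = nn.Linear(in_channels, out_channels)

    def forward(self, x):
        # x: (batch, num_joints, in_channels)
        # Aggregate features from neighboring joints, then apply a shared linear map.
        return torch.relu(self.linear(self.a_hat @ x))

# Toy usage: 25 joints with 3-D coordinates each.
adjacency = torch.zeros(25, 25)          # fill with the skeleton's bone connections
layer = SkeletonGraphConv(3, 64, adjacency)
out = layer(torch.randn(8, 25, 3))       # -> (8, 25, 64)
```

In the paper's setting, multiple such graph features (for joints, bones, and body parts) and streams would be combined, with the graph weighting adapted during training; the sketch shows only the single-graph building block.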

Authors (6)
  1. Dong Yang (163 papers)
  2. Monica Mengqi Li (1 paper)
  3. Hong Fu (6 papers)
  4. Jicong Fan (36 papers)
  5. Zhao Zhang (250 papers)
  6. Howard Leung (6 papers)
Citations (4)
