XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera (1907.00837v2)

Published 1 Jul 2019 in cs.CV and cs.GR

Abstract: We present a real-time approach for multi-person 3D motion capture at over 30 fps using a single RGB camera. It operates successfully in generic scenes which may contain occlusions by objects and by other people. Our method operates in subsequent stages. The first stage is a convolutional neural network (CNN) that estimates 2D and 3D pose features along with identity assignments for all visible joints of all individuals. We contribute a new architecture for this CNN, called SelecSLS Net, that uses novel selective long and short range skip connections to improve the information flow, allowing for a drastically faster network without compromising accuracy. In the second stage, a fully connected neural network turns the possibly partial (on account of occlusion) 2D pose and 3D pose features for each subject into a complete 3D pose estimate per individual. The third stage applies space-time skeletal model fitting to the predicted 2D and 3D pose per subject to further reconcile the 2D and 3D pose and enforce temporal coherence. Our method returns the full skeletal pose in joint angles for each subject. This is a further key distinction from previous work that does not produce joint angle results for a coherent skeleton in real time in multi-person scenes. The proposed system runs on consumer hardware at a previously unseen speed of more than 30 fps given 512x320 images as input, while achieving state-of-the-art accuracy, which we demonstrate on a range of challenging real-world scenes.

Authors (10)
  1. Dushyant Mehta (15 papers)
  2. Oleksandr Sotnychenko (8 papers)
  3. Franziska Mueller (16 papers)
  4. Weipeng Xu (44 papers)
  5. Mohamed Elgharib (38 papers)
  6. Pascal Fua (176 papers)
  7. Hans-Peter Seidel (68 papers)
  8. Helge Rhodin (54 papers)
  9. Gerard Pons-Moll (81 papers)
  10. Christian Theobalt (251 papers)
Citations (164)

Summary

  • The paper introduces a novel method for real-time 3D pose capture of multiple people using a single RGB camera, eliminating the need for complex multi-camera setups.
  • It presents an efficient CNN architecture, SelecSLS Net, that achieves over 30 fps by accurately extracting both 2D and 3D pose features from a single frame.
  • The system’s temporal refinement and benchmark performance highlight its potential for applications in animation, augmented reality, and interactive environments.

XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera

The paper "XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera" presents a novel method for capturing 3D motion of multiple people using a single RGB camera, achieving real-time performance. Unlike traditional motion capture systems that require specialized sensors, multi-camera setups, or restrictive suits, this approach simplifies the setup significantly while maintaining robustness to occlusions and interactions in dynamic environments.

Method Overview

XNect's methodology consists of three key stages:

  1. Initial Pose Estimation: The process begins with a convolutional neural network (CNN), designed to infer both 2D and 3D pose features from a single frame. It uses a novel architecture, SelecSLS Net, which includes selective skip connections to optimize the flow of information without compromising speed. This architecture achieves real-time performance of over 30 frames per second (fps) at a resolution of 512x320 pixels.
  2. 3D Pose Estimation: Following the extraction of pose features, a fully-connected network processes these features to yield a complete 3D skeletal pose for each subject. The network reconciles potential conflicts due to occlusions by using body joint priors and observed joint confidences.
  3. Temporal Consistency and Refinement: To ensure temporal coherence and produce stable joint angles, a model-based skeleton fitting routine aligns the pose estimates over time. This stage handles the integration of joint angle predictions and refines the pose in real-time, capable of driving animated characters directly.
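The three-stage flow above can be sketched as a toy pipeline. This is purely illustrative: the joint count, the random stand-ins for the CNN and lifting network, and the exponential smoothing used in place of the paper's space-time skeletal fitting are all assumptions made for the sketch, not the actual XNect implementation.

```python
import numpy as np

NUM_JOINTS = 21  # illustrative joint count; the paper's skeleton differs

def stage1_cnn(frame):
    """Placeholder for SelecSLS Net: per-joint 2D locations, 3D pose
    features, and a visibility/occlusion mask. Random values stand in
    for real network output."""
    rng = np.random.default_rng(frame)
    pose2d = rng.uniform(0, 1, (NUM_JOINTS, 2))    # normalized image coords
    feat3d = rng.normal(size=(NUM_JOINTS, 3))      # per-joint 3D features
    visible = rng.uniform(0, 1, NUM_JOINTS) > 0.2  # occlusion mask
    return pose2d, feat3d, visible

def stage2_lift(pose2d, feat3d, visible):
    """Placeholder for the fully connected lifting network: completes a
    full 3D pose even when some joints are occluded."""
    pose3d = feat3d.copy()
    pose3d[~visible] = 0.0  # stand-in for learned completion of hidden joints
    return pose3d

def stage3_fit(pose3d, prev_pose3d, alpha=0.8):
    """Stand-in for space-time skeletal fitting: simple exponential
    smoothing mimics the enforcement of temporal coherence."""
    if prev_pose3d is None:
        return pose3d
    return alpha * pose3d + (1 - alpha) * prev_pose3d

prev = None
for frame in range(3):  # pretend video loop, one subject
    p2d, f3d, vis = stage1_cnn(frame)
    p3d = stage2_lift(p2d, f3d, vis)
    prev = stage3_fit(p3d, prev)
print(prev.shape)  # (21, 3)
```

The real system performs this per subject per frame, with the third stage additionally solving for joint angles of a kinematic skeleton rather than smoothing raw joint positions.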

Empirical Performance

The system demonstrates state-of-the-art accuracy on both controlled datasets and complex real-world footage. On benchmarks such as the MuPoTS-3D dataset, XNect competes closely with top-performing systems, many of which are not real-time, while itself running in real time. The method also remains robust in challenging scenarios involving multiple interacting people and complex occlusions.
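Multi-person benchmarks such as MuPoTS-3D are commonly scored with 3DPCK, the percentage of joints whose predicted 3D position falls within a fixed distance (typically 150 mm) of the ground truth. A minimal sketch of that metric, with hypothetical array shapes chosen for illustration:

```python
import numpy as np

def pck3d(pred, gt, threshold_mm=150.0):
    """Fraction of joints whose predicted 3D position lies within
    `threshold_mm` of ground truth (the 3DPCK metric commonly
    reported on MuPoTS-3D)."""
    dists = np.linalg.norm(pred - gt, axis=-1)  # per-joint Euclidean error, mm
    return float((dists <= threshold_mm).mean())

gt = np.zeros((14, 3))                     # 14 joints, coords in mm
pred = gt + np.array([100.0, 0.0, 0.0])    # every joint off by 100 mm
print(pck3d(pred, gt))  # 1.0 (all joints within 150 mm)
```

The reported numbers in the paper additionally distinguish between evaluating all ground-truth poses and only those the system detects.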

The training scheme leverages deep neural networks with a carefully curated set of multi-person datasets, enabling the approach to generalize effectively across different configurations and motions encountered in the wild.

Architectural Innovations

SelecSLS Net, the core architecture of the initial CNN stage, stands out for its efficiency, surpassing conventional backbones such as ResNet-50 in runtime. Its selective long- and short-range skip connections balance information flow against computational cost, yielding high throughput without sacrificing accuracy.
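The skip-connection pattern can be illustrated with a toy module: intermediate features are concatenated within the module (short-range skips), while the module's input is carried forward into the final fusion step (long-range skip). Everything here is an assumption for illustration: the channel counts, the random 1x1-conv stand-in, and the exact wiring do not match the published SelecSLS Net definition.

```python
import numpy as np

def conv(x, out_ch, seed):
    """Stand-in for a 1x1 convolution + ReLU: fixed random channel
    mixing applied to a (channels, spatial) feature map."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(x.shape[0], out_ch)) / np.sqrt(x.shape[0])
    return np.maximum(x.T @ w, 0.0).T  # mix channels, apply ReLU

def selecsls_module(x, mid_ch, out_ch):
    """Illustrative SelecSLS-style module: intermediate features are
    reused via concatenation (short-range skips) and the module input
    feeds the fusion conv directly (long-range skip)."""
    a = conv(x, mid_ch, seed=1)
    b = conv(a, mid_ch, seed=2)
    c = conv(np.concatenate([a, b], axis=0), mid_ch, seed=3)  # short-range skip
    fused = np.concatenate([x, c], axis=0)                    # long-range skip
    return conv(fused, out_ch, seed=4)

x = np.random.default_rng(0).normal(size=(32, 8))  # toy (channels, spatial) map
y = selecsls_module(x, mid_ch=16, out_ch=64)
print(y.shape)  # (64, 8)
```

Concatenative reuse of this kind keeps each conv narrow while preserving information flow, which is the efficiency argument behind the design.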

Implications and Future Directions

This research has substantial practical implications in fields ranging from animation and augmented reality to human-computer interaction and sports science. The ability to robustly track multiple people in three dimensions using minimal hardware opens up new possibilities for seamless integration of motion capture in everyday applications and environments.

Theoretically, the introduction of scalable architectures like SelecSLS could inspire further developments in other areas of computer vision beyond pose estimation, enhancing efficiency in models that require real-time inference capabilities.

Future research may explore integrating additional sensors or utilizing advanced identity tracking to handle scenarios involving fast camera movements or large crowds, where identity maintenance becomes challenging. Further optimization in terms of temporal resolution and accuracy could make such systems even more ubiquitous in interactive scenarios.
