
Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB

Published 9 Dec 2017 in cs.CV | (1712.03453v3)

Abstract: We propose a new single-shot method for multi-person 3D pose estimation in general scenes from a monocular RGB camera. Our approach uses novel occlusion-robust pose-maps (ORPM) which enable full body pose inference even under strong partial occlusions by other people and objects in the scene. ORPM outputs a fixed number of maps which encode the 3D joint locations of all people in the scene. Body part associations allow us to infer 3D pose for an arbitrary number of people without explicit bounding box prediction. To train our approach we introduce MuCo-3DHP, the first large scale training data set showing real images of sophisticated multi-person interactions and occlusions. We synthesize a large corpus of multi-person images by compositing images of individual people (with ground truth from multi-view performance capture). We evaluate our method on our new challenging 3D annotated multi-person test set MuPoTs-3D where we achieve state-of-the-art performance. To further stimulate research in multi-person 3D pose estimation, we will make our new datasets, and associated code publicly available for research purposes.

Citations (375)

Summary

  • The paper introduces occlusion-robust pose maps (ORPMs) to enable accurate 3D pose estimation under challenging occlusions in crowded scenes.
  • It employs a single-shot CNN architecture that jointly predicts 2D and 3D poses, eliminating the need for separate detection and post-processing steps.
  • The method outperforms existing techniques on benchmarks like MuPoTS-3D and MPI-INF-3DHP, achieving a 3DPCK of 65.0 and robust performance in dynamic scenarios.

An Analytical Overview of "Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB"

The paper "Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB" proposes a method for estimating the 3D poses of multiple people from a single RGB image, without the explicit person detection and post-processing steps that traditional pipelines require. It introduces occlusion-robust pose-maps (ORPMs), which cope with inter-person and object occlusions by encoding each joint's 3D position redundantly at multiple read-out locations in the image, improving both robustness and efficiency.

Methodological Contributions

  1. Occlusion-Robust Pose-Maps (ORPMs): The core innovation involves using ORPMs which allow full body pose inference under occlusion. These maps are structured such that they maintain a fixed output size without requiring dynamic allocation per detected individual, thus enhancing estimation efficiency. By decomposing the human body into limbs and torso, the system can achieve redundancy without spatial conflicts in crowded scenes.
  2. Single-Shot CNN Architecture: The method employs a convolutional neural network (CNN) that jointly predicts 2D and 3D poses in one pass, eliminating the need for separate bounding-box predictions for each person. This one-step process contrasts with multi-phase approaches that require initial person detection, followed by pose estimation, which can propagate errors across phases.
  3. New Datasets: To train and evaluate their model, the authors introduce MuCo-3DHP, a synthesized dataset from composited real images featuring multiple people with known 3D poses. Additionally, MuPoTS-3D is presented as a challenging real-world multi-person test dataset that provides ground truth for validation in both indoor and outdoor settings.
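The ORPM read-out described in point 1 can be sketched as follows. This is a deliberate simplification, not the authors' implementation: the map layout, function names, and the single-fallback rule are assumptions. The idea is that each joint's 3D coordinates are read from its location map at the joint's own 2D pixel position, and when that joint is occluded (low 2D confidence), the same value can instead be read at a more reliably visible base location such as the neck or torso, where the maps redundantly store the full pose.

```python
import numpy as np

def read_pose_from_orpm(location_maps, joints_2d, confidences,
                        base_idx=1, conf_thresh=0.3):
    """Read one person's 3D pose from occlusion-robust pose-maps (sketch).

    location_maps: (H, W, J, 3) array; map j stores joint j's 3D coordinates,
                   written redundantly at several body locations.
    joints_2d:     (J, 2) predicted 2D pixel locations (x, y) for this person.
    confidences:   (J,) 2D detection confidences.
    base_idx:      index of a fallback joint (e.g. neck) assumed visible.
    Returns a (J, 3) array of 3D joint positions.
    """
    num_joints = joints_2d.shape[0]
    pose_3d = np.zeros((num_joints, 3))
    for j in range(num_joints):
        if confidences[j] >= conf_thresh:
            # Joint visible: read its map at its own 2D location.
            x, y = joints_2d[j].astype(int)
        else:
            # Joint occluded: read its map at the base joint's location,
            # where the full pose is redundantly encoded.
            x, y = joints_2d[base_idx].astype(int)
        pose_3d[j] = location_maps[y, x, j]
    return pose_3d
```

The fixed number of maps (one per joint, regardless of how many people are in the scene) is what keeps the network output size constant; per-person poses are separated by reading each map at that person's associated 2D joint locations.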

Performance and Results

The authors compare their method extensively against existing techniques such as LCR-Net and VNect on the new MuPoTS-3D dataset, showcasing superior performance, particularly in dynamic and occluded scenarios. The approach achieves state-of-the-art results with a 3DPCK of 65.0, demonstrating its robustness in diverse settings. Critically, it remains competitive even against models designed specifically for single-person 3D pose estimation, emphasizing the method's generalizability and accuracy.
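For readers unfamiliar with the metric quoted above: 3DPCK (percentage of correct 3D keypoints) counts a joint as correct if, after root alignment, its predicted 3D position lies within a fixed threshold of the ground truth; 150 mm is the threshold commonly used on these benchmarks. The sketch below illustrates the computation; the function name and root-joint index are illustrative, not part of any official evaluation code.

```python
import numpy as np

def pck3d(pred, gt, root_idx=0, thresh_mm=150.0):
    """Percentage of correct 3D keypoints (3DPCK), root-aligned.

    pred, gt: (J, 3) arrays of joint positions in millimetres.
    Both poses are translated so their root joints coincide; a joint
    is correct if its error is at most `thresh_mm` after alignment.
    """
    pred_aligned = pred - pred[root_idx]
    gt_aligned = gt - gt[root_idx]
    errors = np.linalg.norm(pred_aligned - gt_aligned, axis=1)
    return 100.0 * np.mean(errors <= thresh_mm)
```

Averaging this score over all annotated people and frames yields the single benchmark number reported on MuPoTS-3D.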

On MPI-INF-3DHP, their method consistently outperforms major existing approaches, showing clear advantages in challenging postures like sitting and crouching, which are often problematic for traditional methods. This suggests that the ORPMs effectively address occlusions and can extrapolate accurate poses from limited visible data.

Theoretical and Practical Implications

The approach advances both the theoretical treatment of pose estimation under occlusion and its practical application across varied real-world scenarios. While many existing systems struggle with occlusions, whether caused by other people or by objects in the environment, the ORPM framework addresses this challenge efficiently thanks to its single-shot inference.

In practical terms, this advancement moves towards more robust pose estimation in fields like human-computer interaction, sports analytics, and entertainment where accuracy under occlusion is pivotal. Furthermore, by making the datasets public, the authors provide a valuable resource for ongoing and future research in multi-person 3D pose estimation.

Future Directions

Despite the advantages, some limitations remain, such as potential failures when 2D joint detection is inaccurate or ambiguities arise in associating detected joints with individuals. Addressing these limitations could involve refining the handling of overlapping or proximate joints and improving the integration of 2D and 3D data. Additionally, further work could focus on adapting the methodology for real-time applications in more constrained settings or complex environments.

In summary, the paper presents a significant step in the evolution of 3D pose estimation, particularly within the challenging domain of crowded and occluded multi-person scenes. Its methodological innovations and practical implications extend the capability of existing systems, driving forward possibilities for future research and application in AI and computer vision.
