ODHSR: Online Dense 3D Reconstruction of Humans and Scenes from Monocular Videos (2504.13167v2)

Published 17 Apr 2025 in cs.CV

Abstract: Creating a photorealistic scene and human reconstruction from a single monocular in-the-wild video figures prominently in the perception of a human-centric 3D world. Recent neural rendering advances have enabled holistic human-scene reconstruction but require pre-calibrated camera and human poses, and days of training time. In this work, we introduce a novel unified framework that simultaneously performs camera tracking, human pose estimation and human-scene reconstruction in an online fashion. 3D Gaussian Splatting is utilized to learn Gaussian primitives for humans and scenes efficiently, and reconstruction-based camera tracking and human pose estimation modules are designed to enable holistic understanding and effective disentanglement of pose and appearance. Specifically, we design a human deformation module to reconstruct the details and enhance generalizability to out-of-distribution poses faithfully. Aiming to learn the spatial correlation between human and scene accurately, we introduce occlusion-aware human silhouette rendering and monocular geometric priors, which further improve reconstruction quality. Experiments on the EMDB and NeuMan datasets demonstrate superior or on-par performance with existing methods in camera tracking, human pose estimation, novel view synthesis and runtime. Our project page is at https://eth-ait.github.io/ODHSR.

Summary

A Review of ODHSR: Online Dense 3D Reconstruction of Humans and Scenes from Monocular Videos

The paper "ODHSR: Online Dense 3D Reconstruction of Humans and Scenes from Monocular Videos" presents a unified framework that simultaneously executes camera tracking, human pose estimation, and dense human-scene reconstruction using monocular RGB input. The primary contribution lies in the real-time application of 3D Gaussian Splatting to provide efficient learning and reconstruction of the scene and humans without pre-calibrated cameras or extended training durations, unlike previous methods which can require days for training and pre-calibrated inputs.

Core Methodological Advances

ODHSR represents both humans and scenes with 3D Gaussian primitives learned via 3D Gaussian Splatting. The approach produces photorealistic reconstructions by integrating several modules:

  • Camera Tracking and Human Pose Estimation: Reconstruction-based modules disentangle pose from appearance. Monocular geometric priors and occlusion-aware human silhouette rendering supply the cues that make this disentanglement reliable and further improve reconstruction quality.
  • Human Deformation Module: To generalize to out-of-distribution poses and faithfully capture dynamic garments and body motion, the deformation is split into a rigid, SMPL-driven component and a learned non-rigid component (see the first sketch after this list).
  • Gaussian Splatting-based SLAM Pipeline: Monocular geometric estimates feed an online tracking-and-mapping loop, with keyframe selection keeping the runtime manageable (see the second sketch after this list).
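
The summary describes the deformation module only at a high level. As a minimal illustrative sketch (not the authors' implementation; the class name, layer sizes, and input shapes are all assumptions), the rigid term can be realized as standard SMPL linear blend skinning of the canonical Gaussian centers, with a small pose-conditioned MLP predicting non-rigid residuals such as clothing motion:

```python
# Hypothetical sketch of a rigid + non-rigid deformation module.
# Not the paper's code: names, shapes, and the MLP design are assumptions.
import torch
import torch.nn as nn

class HumanDeformation(nn.Module):
    """Rigid SMPL linear blend skinning plus learned non-rigid offsets."""

    def __init__(self, num_joints: int = 24, hidden: int = 128):
        super().__init__()
        # MLP predicting pose-dependent non-rigid offsets per Gaussian.
        # Input: canonical position (3) + flattened axis-angle pose (J * 3).
        self.offset_mlp = nn.Sequential(
            nn.Linear(3 + num_joints * 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, xyz_canonical, skin_weights, joint_transforms, pose):
        """
        xyz_canonical:    (N, 3) canonical Gaussian centers
        skin_weights:     (N, J) per-Gaussian SMPL skinning weights
        joint_transforms: (J, 4, 4) posed joint transforms from SMPL forward kinematics
        pose:             (J * 3,) axis-angle body pose
        """
        n = xyz_canonical.shape[0]
        # Rigid part: linear blend skinning of the canonical centers.
        blended = torch.einsum('nj,jab->nab', skin_weights, joint_transforms)
        ones = torch.ones(n, 1, device=xyz_canonical.device)
        xyz_h = torch.cat([xyz_canonical, ones], dim=-1)           # (N, 4)
        xyz_rigid = torch.einsum('nab,nb->na', blended, xyz_h)[:, :3]
        # Non-rigid part: pose-conditioned residual, e.g. clothing deformation.
        pose_feat = pose.expand(n, -1)                             # (N, J*3)
        offsets = self.offset_mlp(torch.cat([xyz_canonical, pose_feat], dim=-1))
        return xyz_rigid + offsets
```

Conditioning the residual on the body pose is what allows such a module to extrapolate to out-of-distribution poses, while the rigid skinning term anchors the overall articulation.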
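The exact keyframe criterion used by the pipeline is not spelled out in this summary. A minimal sketch of a common motion-based heuristic in Gaussian Splatting SLAM systems, with hypothetical thresholds:

```python
# Hypothetical keyframe test; the thresholds and the criterion itself are
# assumptions, not taken from the paper.
import numpy as np

def is_new_keyframe(T_last_kf, T_current,
                    trans_thresh=0.1, rot_thresh_deg=10.0):
    """T_* are 4x4 camera-to-world poses for the last keyframe and current frame."""
    # Relative motion between the last keyframe and the current frame.
    T_rel = np.linalg.inv(T_last_kf) @ T_current
    translation = np.linalg.norm(T_rel[:3, 3])
    # Rotation angle recovered from the trace of the relative rotation matrix.
    cos_angle = np.clip((np.trace(T_rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))
    return translation > trans_thresh or angle_deg > rot_thresh_deg
```

Frames passing such a test would join the keyframe set optimized during mapping; tighter thresholds densify the reconstruction at the cost of runtime.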

Strong Numerical Results and Implications

Experiments on the EMDB and NeuMan datasets show ODHSR performing on par with or better than existing methods in camera tracking, human pose estimation, novel view synthesis, and runtime efficiency. Notably, ODHSR achieves a 75x speedup over prior approaches such as HSR, which demand extensive computational resources.

These results have notable implications:

  • Theoretical: The paper closes a substantial gap between monocular video input and high-fidelity 3D reconstruction by removing the need for pre-calibrated setups, opening avenues for dynamic human and scene modeling in real-time applications.
  • Practical: ODHSR can accelerate the development of systems that require precise human and environmental understanding, such as robotics, AR/VR, and surveillance. Its online operation supports scenarios where live data processing is indispensable.

Future Directions

The methodology suggests several directions for future work in computer vision:

  • Enhanced Real-Time Adaptations: Further optimization toward faster processing without sacrificing reconstruction quality would benefit interactive applications in virtual environments.
  • Scalability to Diverse Scenarios: Extending the framework to handle varied lighting conditions, diverse scene dynamics, and more intricate human activities is essential for practical deployment.
  • Integration with Sensor-Based Data: Fusing ODHSR with other sensor data could yield richer environment models, improving the robustness and accuracy of human-scene interaction insights.

In summary, ODHSR demonstrates that dense, online human-scene reconstruction from monocular video is practical without sacrificing accuracy, and it sets a useful reference point for future work balancing computational cost with high-quality visual representation in dynamic settings.
