
Controllable Human-Object Interaction Synthesis (2312.03913v2)

Published 6 Dec 2023 in cs.CV
Abstract: Synthesizing semantic-aware, long-horizon, human-object interaction is critical to simulate realistic human behaviors. In this work, we address the challenging problem of generating synchronized object motion and human motion guided by language descriptions in 3D scenes. We propose Controllable Human-Object Interaction Synthesis (CHOIS), an approach that generates object motion and human motion simultaneously using a conditional diffusion model given a language description, initial object and human states, and sparse object waypoints. Here, language descriptions inform style and intent, and waypoints, which can be effectively extracted from high-level planning, ground the motion in the scene. Naively applying a diffusion model fails to predict object motion aligned with the input waypoints; it also cannot ensure the realism of interactions that require precise hand-object and human-floor contact. To overcome these problems, we introduce an object geometry loss as additional supervision to improve the matching between generated object motion and input object waypoints; we also design guidance terms to enforce contact constraints during the sampling process of the trained diffusion model. We demonstrate that our learned interaction module can synthesize realistic human-object interactions, adhering to provided textual descriptions and sparse waypoint conditions. Additionally, our module seamlessly integrates with a path planning module, enabling the generation of long-term interactions in 3D environments.

Introduction

Synthesizing human behaviors that interact with objects realistically within 3D environments is pivotal for advancements in diverse applications such as computer graphics, AI, and robotics. This work addresses the complex problem of generating simultaneous human and object motion from natural language descriptions, subject to constraints imposed by initial states and environmental geometry.

Human-Object Interaction Synthesis

To intertwine human and object motion, this approach, termed Controllable Human-Object Interaction Synthesis (CHOIS), utilizes a conditional diffusion model. The model takes a language input that signifies the intent and style of the interaction, the initial states of the object and human, and a set of sparse object waypoints that steer the motion within the context of the scene. While language enables specification of actions, waypoints ensure spatial anchoring of those actions. A key aspect of CHOIS is an object geometry loss that refines the generated object motion so it adheres more closely to the input waypoints. Furthermore, guidance terms are applied during the sampling process of the trained diffusion model, enforcing realistic contact between the human and the object and ensuring plausible interaction amid environmental clutter; a sketch of the geometry loss follows below.
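A minimal PyTorch-style sketch of how such an object geometry loss could be formed (the tensor names and shapes here are illustrative assumptions, not the authors' exact implementation): sampled object surface points are transformed by the predicted and ground-truth per-frame object rotations and translations, and the discrepancy between the two transformed point sets is penalized.

```python
import torch

def object_geometry_loss(pred_rot, pred_trans, gt_rot, gt_trans, obj_pts):
    """Hypothetical object geometry loss (sketch).

    pred_rot, gt_rot:     (T, 3, 3) per-frame object rotation matrices
    pred_trans, gt_trans: (T, 3)    per-frame object translations
    obj_pts:              (P, 3)    points sampled on the rest-pose object surface
    """
    # Transform the sampled surface points with predicted and ground-truth motion.
    pred_pts = torch.einsum('tij,pj->tpi', pred_rot, obj_pts) + pred_trans[:, None, :]
    gt_pts = torch.einsum('tij,pj->tpi', gt_rot, obj_pts) + gt_trans[:, None, :]
    # Average per-point deviation over frames and points.
    return (pred_pts - gt_pts).abs().mean()
```

Because the ground-truth translations pass through the input waypoints, supervising the full transformed point set (rather than the object center alone) couples the predicted trajectory to both the waypoints and the object's geometry.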

Technique Details

CHOIS operates by encoding the geometry of objects using a Basis Point Set (BPS) representation and combining this with a masked motion condition vector that encodes the initial states and the sparse 2D/3D object waypoint positions. A transformer-based denoising network takes these conditions together with the noisy motion representation and produces synchronized human and object motion. To bolster hand-object contact realism, a guidance function is applied during the inference phase that penalizes contact violations on the denoised prediction, avoiding additional training losses that are typically expensive to compute and difficult to balance; a sketch of this sampling-time guidance appears below.
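One way to picture this sampling-time guidance (a sketch under assumed PyTorch-style interfaces; `model`, `contact_cost`, and the update form are illustrative, not the paper's exact procedure): at each denoising step, a differentiable contact cost, e.g. hand-to-object distance plus foot-to-floor penetration, is evaluated on the model's current prediction, and its gradient nudges the noisy sample before the standard diffusion update.

```python
import torch

def guided_denoise_step(model, x_t, t, cond, contact_cost, scale=1.0):
    """One diffusion sampling step with gradient-based guidance (sketch).

    model        - denoising network predicting clean motion from (x_t, t, cond)
    x_t          - (B, T, D) noisy human+object motion at diffusion step t
    cond         - conditioning: language embedding, initial states, waypoints
    contact_cost - differentiable cost, e.g. hand-object distance + foot-floor penetration
    """
    x_t = x_t.detach().requires_grad_(True)
    x0_pred = model(x_t, t, cond)              # predicted clean motion
    cost = contact_cost(x0_pred)               # scalar guidance objective
    grad = torch.autograd.grad(cost, x_t)[0]   # direction that lowers the cost
    # Shift the noisy sample against the gradient before the usual DDPM/DDIM update.
    return (x_t - scale * grad).detach()
```

The design choice is that contact constraints are enforced only at sampling time, so the trained model stays unchanged and the guidance strength can be tuned per scene.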

Evaluation and Applications

Assessed on datasets featuring diverse human-object interactions, CHOIS outperforms the adapted baselines and generates realistic motions across textual descriptions and varied object geometries. An ablation study highlights the significance of the guidance terms in improving contact accuracy and motion fidelity. Integrated with a path-planning module, CHOIS synthesizes continuous, long-horizon, environment-aware human-object interactions from language inputs within full 3D scenes, adhering to language prompts, adapting to different objects, handling sparse waypoints, and navigating cluttered environments.

In summary, CHOIS presents a significant step forward in creating dynamic human-object interactions in virtual scenarios, offering a promising tool for systems that emulate human actions and decision-making with accuracy and context-awareness.

Authors (6)
  1. Jiaman Li (17 papers)
  2. Alexander Clegg (14 papers)
  3. Roozbeh Mottaghi (66 papers)
  4. Jiajun Wu (249 papers)
  5. Xavier Puig (14 papers)
  6. C. Karen Liu (93 papers)
Citations (27)