
Artemis: Articulated Neural Pets with Appearance and Motion synthesis

Published 11 Feb 2022 in cs.GR and cs.CV | (arXiv:2202.05628v3)

Abstract: We humans are entering a virtual era and want to bring animals into the virtual world with us as companions. Yet computer-generated imagery (CGI) of furry animals is limited by tedious off-line rendering, let alone interactive motion control. In this paper, we present ARTEMIS, a novel neural modeling and rendering pipeline for generating ARTiculated neural pets with appEarance and Motion synthesIS. ARTEMIS enables interactive motion control, real-time animation, and photo-realistic rendering of furry animals. The core of ARTEMIS is a neural-generated imagery (NGI) animal engine, which adopts an efficient octree-based representation for animal animation and fur rendering. Animation then becomes equivalent to voxel-level deformation based on explicit skeletal warping. We further use fast octree indexing and an efficient volumetric rendering scheme to generate appearance and density feature maps. Finally, we propose a novel shading network that generates high-fidelity details of appearance and opacity under novel poses from the appearance and density feature maps. For the motion control module in ARTEMIS, we combine a state-of-the-art animal motion capture approach with a recent neural character control scheme. We introduce an effective optimization scheme to reconstruct the skeletal motion of real animals captured by a multi-view RGB and Vicon camera array. We feed all the captured motion into a neural character control scheme to generate abstract control signals with motion styles. We further integrate ARTEMIS into existing engines that support VR headsets, providing an unprecedented immersive experience in which a user can intimately interact with a variety of virtual animals with vivid movements and photo-realistic appearance. We make our ARTEMIS model and dynamic furry animal dataset available at https://haiminluo.github.io/publication/artemis/.

Citations (11)

Summary

  • The paper presents ARTEMIS, a neural framework that achieves real-time synthesis of detailed appearance and dynamic motion for articulated furry pets.
  • It uses an octree-based neural representation and a dedicated shading network to capture fine fur details and explicit skeletal deformations.
  • Quantitative evaluations show improved PSNR, SSIM, and LPIPS metrics over baselines, highlighting its efficiency for interactive virtual environments.

Overview of "Artemis: Articulated Neural Pets with Appearance and Motion Synthesis"

This paper presents ARTEMIS, an innovative neural modeling and rendering framework designed to generate articulated neural pets capable of synthesizing both appearance and motion. The primary goal of ARTEMIS is to overcome the traditional limitations of creating photo-realistic computer-generated imagery (CGI) of furry animals, which typically involves laborious off-line processing and lacks interactivity. ARTEMIS extends these capabilities into the real-time, interactive domain, enabling users to engage with virtual animals in a more realistic manner.

Core Components and Methodology

ARTEMIS's innovation lies in its Neural-Generated Imagery (NGI) animal engine, which integrates multiple sophisticated techniques for efficient representation, animation, and rendering of animals.

  • Neural Representation and Rendering: ARTEMIS utilizes an octree-based approach to capture and render animal appearance and fur details. This involves transforming complex animal models into voxel-level animatable neural volumes using a combination of spherical harmonics and voxel features for appearance modeling. This encoding allows for explicit skeletal deformation and high-resolution real-time rendering.
  • Shading and Design: A distinctive feature of ARTEMIS is its neural shading network designed to render high-fidelity details of appearance and opacity. The shading network enhances spatial details through a convolutional architecture, maintaining photo-realism even under novel poses and lighting conditions.
  • Motion Control and Synthesis: For motion, ARTEMIS adopts a hybrid approach by integrating state-of-the-art motion capture and neural character control. This involves capturing precise skeletal motions using a combination of RGB and Vicon cameras, and then employing a neural model to generate motion dynamics under user guidance. The method supports seamless transition and control of different motion styles, allowing users to interact with virtual pets in an immersive environment.
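The voxel-level deformation driven by explicit skeletal warping described above can be sketched with linear blend skinning: each voxel center is moved by a weighted combination of per-bone rigid transforms. The following is a minimal, hypothetical illustration of that idea (function name, array layout, and weights are assumptions for exposition, not the paper's implementation):

```python
import numpy as np

def blend_skinning(points, bone_transforms, skin_weights):
    """Warp rest-pose voxel centers by a weighted blend of bone transforms.

    points:          (N, 3) voxel centers in the rest pose
    bone_transforms: (B, 4, 4) homogeneous rest-to-posed transform per bone
    skin_weights:    (N, B) per-voxel skinning weights (each row sums to 1)
    returns:         (N, 3) warped voxel centers in the posed space
    """
    n = points.shape[0]
    homo = np.concatenate([points, np.ones((n, 1))], axis=1)    # (N, 4)
    # Apply every bone transform to every point: (N, B, 4)
    per_bone = np.einsum('bij,nj->nbi', bone_transforms, homo)
    # Blend the per-bone results with the skinning weights: (N, 4)
    blended = np.einsum('nb,nbi->ni', skin_weights, per_bone)
    return blended[:, :3]
```

At render time, a scheme like this is typically applied in the inverse direction as well (posed-space sample point back to the rest-pose volume) so that appearance features can be fetched from the canonical octree; the sketch above shows only the forward warp.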

Results and Evaluation

The authors present extensive validation of ARTEMIS through quantitative and qualitative assessments. Experiments demonstrate significant improvements over existing methods, particularly in terms of rendering quality and computational efficiency. Notably, ARTEMIS achieves impressive photo-realistic rendering of dynamic, furry animals in real-time, making it suitable for interactive applications such as virtual reality (VR).

Key performance metrics, such as PSNR, SSIM, and LPIPS, are used to evaluate rendering quality, with ARTEMIS outperforming baselines like NeuralVolumes and AnimatableNeRF in preserving fine details and minimizing artifacts. These results are further supported by runtime analyses showing ARTEMIS's ability to generate interactive animations at frame rates conducive to real-time applications.
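Of the metrics above, PSNR is a simple function of the mean squared error between the rendered and reference images; a minimal reference implementation (assuming pixel values normalized to [0, 1]) looks like this:

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR indicates lower pixel-wise error; SSIM and LPIPS complement it by measuring structural and perceptual similarity, respectively, which pixel-wise error alone does not capture.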

Implications and Future Directions

The development of ARTEMIS has significant theoretical and practical implications for computer graphics and virtual interactivity. It showcases the potential for real-time neural rendering to close the gap between high-fidelity offline CGI and interactive applications. From a theoretical perspective, ARTEMIS offers insights into the effective use of neural networks for modeling complex dynamic systems, with possible extensions to human figures and other objects.

Practically, ARTEMIS paves the way for more accessible creation and manipulation of photo-realistic CG models, which is useful in fields ranging from entertainment to virtual training environments. The approach could be expanded with further integration into game engines and VR systems, allowing broader adoption in consumer markets.

Looking forward, future developments could include exploring real-time lighting and shading control to enhance realism under varying environmental conditions and expanding the neural engine to support more varied interaction scenarios. Moreover, enabling unsupervised learning from multi-view capture might reduce reliance on manual modeling, facilitating broader applicability in digital media production.

In conclusion, ARTEMIS represents a significant advancement in neural rendering technology, bridging the gap between high-quality CGI and interactive virtual environments, thus opening new avenues for both research and industry applications.
