
DTF-Net: Category-Level Pose Estimation and Shape Reconstruction via Deformable Template Field (2308.02239v1)

Published 4 Aug 2023 in cs.CV, cs.AI, and cs.RO

Abstract: Estimating 6D poses and reconstructing 3D shapes of objects in open-world scenes from RGB-depth image pairs is challenging. Many existing methods rely on learning geometric features that correspond to specific templates while disregarding shape variations and pose differences among objects in the same category. As a result, these methods underperform when handling unseen object instances in complex environments. In contrast, other approaches aim to achieve category-level estimation and reconstruction by leveraging normalized geometric structure priors, but the static prior-based reconstruction struggles with substantial intra-class variations. To solve these problems, we propose the DTF-Net, a novel framework for pose estimation and shape reconstruction based on implicit neural fields of object categories. In DTF-Net, we design a deformable template field to represent the general category-wise shape latent features and intra-category geometric deformation features. The field establishes continuous shape correspondences, deforming the category template into arbitrary observed instances to accomplish shape reconstruction. We introduce a pose regression module that shares the deformation features and template codes from the fields to estimate the accurate 6D pose of each object in the scene. We integrate a multi-modal representation extraction module to extract object features and semantic masks, enabling end-to-end inference. Moreover, during training, we implement a shape-invariant training strategy and a viewpoint sampling method to further enhance the model's capability to extract object pose features. Extensive experiments on the REAL275 and CAMERA25 datasets demonstrate the superiority of DTF-Net in both synthetic and real scenes. Furthermore, we show that DTF-Net effectively supports grasping tasks with a real robot arm.
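The core idea of the deformable template field — evaluating a shared category-level implicit shape after an instance-specific deformation of the query points — can be sketched as follows. This is an illustrative toy, not the authors' implementation: the template here is a unit-sphere SDF and the deformation is a simple anisotropic scaling, whereas DTF-Net learns both as neural fields; all names and shapes below are assumptions.

```python
import numpy as np

def template_sdf(points):
    """Category-level template shape: signed distance to a unit sphere.
    (Stand-in for a learned category template field.)"""
    return np.linalg.norm(points, axis=-1) - 1.0

def deformation_field(points, deform_code):
    """Instance-specific deformation, here a toy per-axis scaling
    parameterized by a latent 'deform_code' (3 scale factors).
    Maps instance space into the canonical template space."""
    return points * deform_code

def instance_sdf(points, deform_code):
    """SDF of an observed instance: deform queries into template
    space, then evaluate the shared category template."""
    return template_sdf(deformation_field(points, deform_code))

# A 'stretched sphere' instance, twice as tall along z:
code = np.array([1.0, 1.0, 0.5])
surface_point = np.array([[0.0, 0.0, 2.0]])
print(instance_sdf(surface_point, code))  # ~0: on the instance surface
```

Because every instance is expressed as a deformation of one canonical template, correspondences between instances come for free through template space — the property the abstract exploits both for shape reconstruction and for sharing features with the pose regression module.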

Authors (10)
  1. Haowen Wang (25 papers)
  2. Zhipeng Fan (9 papers)
  3. Zhen Zhao (85 papers)
  4. Zhengping Che (41 papers)
  5. Zhiyuan Xu (47 papers)
  6. Dong Liu (267 papers)
  7. Feifei Feng (23 papers)
  8. Yakun Huang (7 papers)
  9. Xiuquan Qiao (8 papers)
  10. Jian Tang (327 papers)
Citations (4)
