
DVGG: Deep Variational Grasp Generation for Dextrous Manipulation (2211.11154v1)

Published 21 Nov 2022 in cs.RO

Abstract: Grasping with anthropomorphic robotic hands involves far more hand-object interaction than grasping with parallel-jaw grippers. Modeling hand-object interactions is essential to the study of dextrous manipulation with multi-finger hands. This work presents DVGG, an efficient grasp generation network that takes a single-view observation as input and predicts high-quality grasp configurations for unknown objects. Our generative model consists of three components: 1) point cloud completion for the target object based on the partial observation; 2) generation of diverse grasp sets given the complete point cloud; 3) iterative grasp pose refinement toward physically plausible grasps. To train our model, we build a large-scale grasping dataset containing about 300 common object models with 1.5M annotated grasps in simulation. Experiments in simulation show that our model predicts robust grasp poses with wide variety and a high success rate. Real robot platform experiments demonstrate that the model trained on our dataset performs well in the real world. Remarkably, our method achieves a grasp success rate of 70.7% for novel objects on the real robot platform, a significant improvement over the baseline methods.
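
The abstract outlines a three-stage pipeline: point cloud completion, diverse grasp generation, and iterative refinement. The sketch below is a minimal, hypothetical stand-in for that data flow only; every function body, shape, and parameter here is an assumption for illustration and is not the network architecture or refinement rule described in the paper.

```python
# Hypothetical sketch of a completion -> generation -> refinement pipeline.
# All function bodies are illustrative placeholders, not DVGG's actual method.
import numpy as np

def complete_point_cloud(partial_pc: np.ndarray, num_points: int = 2048) -> np.ndarray:
    """Stage 1 (stand-in): densify a partial single-view cloud.
    Here we simply resample points; the paper uses a learned completion network."""
    idx = np.random.choice(len(partial_pc), num_points, replace=True)
    return partial_pc[idx]

def sample_grasps(complete_pc: np.ndarray, num_grasps: int = 16, latent_dim: int = 8) -> np.ndarray:
    """Stage 2 (stand-in): draw diverse grasp configurations conditioned on the object.
    A variational generator would decode latent samples; here we map random latents
    to (x, y, z, roll, pitch, yaw) poses near the object centroid."""
    center = complete_pc.mean(axis=0)
    z = np.random.randn(num_grasps, latent_dim)          # latent samples give diversity
    positions = center + 0.05 * z[:, :3]
    orientations = np.pi * np.tanh(z[:, 3:6])
    return np.concatenate([positions, orientations], axis=1)

def refine_grasps(grasps: np.ndarray, complete_pc: np.ndarray,
                  steps: int = 10, lr: float = 0.01) -> np.ndarray:
    """Stage 3 (stand-in): iteratively nudge grasp positions toward the object surface,
    mimicking refinement toward physically plausible contact."""
    refined = grasps.copy()
    for _ in range(steps):
        dists = np.linalg.norm(complete_pc[None, :, :] - refined[:, None, :3], axis=-1)
        nearest = complete_pc[dists.argmin(axis=1)]       # closest object point per grasp
        refined[:, :3] += lr * (nearest - refined[:, :3])
    return refined

if __name__ == "__main__":
    partial = np.random.rand(500, 3)         # stand-in single-view observation
    full = complete_point_cloud(partial)     # 1) completion
    candidates = sample_grasps(full)         # 2) diverse grasp generation
    final = refine_grasps(candidates, full)  # 3) iterative refinement
    print(final.shape)                       # (16, 6) grasp poses
```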

Authors (7)
  1. Wei Wei (425 papers)
  2. Daheng Li (1 paper)
  3. Peng Wang (832 papers)
  4. Yiming Li (199 papers)
  5. Wanyi Li (9 papers)
  6. Yongkang Luo (67 papers)
  7. Jun Zhong (13 papers)
Citations (43)
