The Grasp Reset Mechanism: An Automated Apparatus for Conducting Grasping Trials (2402.18650v1)

Published 28 Feb 2024 in cs.RO

Abstract: Advancing robotic grasping and manipulation requires the ability to test algorithms and train learning models on large numbers of grasps. Towards the goal of more advanced grasping, we present the Grasp Reset Mechanism (GRM), a fully automated apparatus for conducting large-scale grasping trials. The GRM automates the process of resetting a grasping environment, repeatably placing an object in a fixed location and controllable 1-D orientation. It also collects data and swaps between multiple objects, enabling robust dataset collection with no human intervention. We also present a standardized state machine interface for control, which allows integration of most manipulators with minimal effort. In addition to the physical design and corresponding software, we include a dataset of 1,020 grasps. The grasps were created with a Kinova Gen3 robot arm and Robotiq 2F-85 Adaptive Gripper to enable training of learning models and to demonstrate the capabilities of the GRM. The dataset includes grasps conducted across four objects and a variety of orientations. Manipulator states, object pose, video, and grasp success data are provided for every trial.
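The abstract describes a state machine interface that sequences each trial: reset the environment, set the object's 1-D orientation, attempt the grasp, and record the outcome. The sketch below illustrates one plausible shape of such a trial loop; the state names, function signatures, and logged fields are assumptions for illustration, not the paper's actual API.

```python
from enum import Enum, auto

class GRMState(Enum):
    # Hypothetical states for one grasp trial cycle
    RESET = auto()   # apparatus returns the object to its fixed location
    ORIENT = auto()  # set the controllable 1-D orientation
    GRASP = auto()   # manipulator attempts the grasp
    RECORD = auto()  # log object, orientation, and success label
    DONE = auto()

def run_trials(objects, orientations, attempt_grasp):
    """Run one trial per (object, orientation) pair with no human intervention.

    attempt_grasp(obj, theta) -> bool stands in for the manipulator-specific
    grasp routine that the standardized interface would call.
    """
    log = []
    for obj in objects:            # object swapping between batches
        for theta in orientations:
            state = GRMState.RESET
            success = None
            while state is not GRMState.DONE:
                if state is GRMState.RESET:
                    state = GRMState.ORIENT
                elif state is GRMState.ORIENT:
                    state = GRMState.GRASP
                elif state is GRMState.GRASP:
                    success = attempt_grasp(obj, theta)
                    state = GRMState.RECORD
                elif state is GRMState.RECORD:
                    log.append({"object": obj,
                                "orientation": theta,
                                "success": success})
                    state = GRMState.DONE
    return log
```

Decoupling the trial sequencing from `attempt_grasp` is what would let most manipulators integrate with minimal effort: only the grasp routine changes per robot, while the reset, orient, and record steps stay fixed.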
