RoboKeyGen: Robot Pose and Joint Angles Estimation via Diffusion-based 3D Keypoint Generation (2403.18259v1)

Published 27 Mar 2024 in cs.RO

Abstract: Estimating robot pose and joint angles is significant in advanced robotics, enabling applications like robot collaboration and online hand-eye calibration. However, the introduction of unknown joint angles makes prediction more complex than simple robot pose estimation, due to its higher dimensionality. Previous methods either regress 3D keypoints directly or utilise a render & compare strategy. These approaches often falter in terms of performance or efficiency and grapple with the cross-camera gap problem. This paper presents a novel framework that bifurcates the high-dimensional prediction task into two manageable subtasks: 2D keypoints detection and lifting 2D keypoints to 3D. This separation promises enhanced performance without sacrificing the efficiency innate to keypoint-based techniques. A vital component of our method is the lifting of 2D keypoints to 3D keypoints. Common deterministic regression methods may falter when faced with uncertainties from 2D detection errors or self-occlusions. Leveraging the robust modeling potential of diffusion models, we reframe this issue as a conditional 3D keypoints generation task. To bolster cross-camera adaptability, we introduce the Normalised Camera Coordinate Space (NCCS), ensuring alignment of estimated 2D keypoints across varying camera intrinsics. Experimental results demonstrate that the proposed method outperforms the state-of-the-art render & compare method and achieves higher inference speed. Furthermore, the tests accentuate our method's robust cross-camera generalisation capabilities. We intend to release both the dataset and code at https://nimolty.github.io/Robokeygen/
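
The cross-camera idea behind the Normalised Camera Coordinate Space can be illustrated with a minimal sketch: divide out the camera intrinsics so that 2D keypoints from cameras with different focal lengths and principal points land in a shared, intrinsics-invariant frame. The function names and the exact normalisation below are illustrative assumptions for a standard pinhole model, not the authors' released code.

```python
import numpy as np

def normalise_keypoints_nccs(kps_2d, K):
    """Map pixel-space 2D keypoints into an intrinsics-invariant frame (sketch).

    kps_2d: (N, 2) array of pixel coordinates (u, v)
    K:      (3, 3) pinhole intrinsic matrix
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = kps_2d[:, 0], kps_2d[:, 1]
    # Back-project onto the normalised image plane (z = 1): the result no
    # longer depends on focal length or principal point.
    x = (u - cx) / fx
    y = (v - cy) / fy
    return np.stack([x, y], axis=-1)

def project(points_cam, K):
    """Project 3D points (camera frame) to pixel coordinates with a pinhole model."""
    uv = (K @ (points_cam / points_cam[:, 2:3]).T).T
    return uv[:, :2]

# Example: the same 3D keypoints seen by two cameras with different intrinsics
# map to identical normalised coordinates, which is what lets a 2D-to-3D
# lifting model generalise across cameras.
K_a = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
K_b = np.array([[900.0, 0.0, 480.0], [0.0, 900.0, 360.0], [0.0, 0.0, 1.0]])
points_cam = np.array([[0.10, -0.05, 0.80], [0.00, 0.20, 1.20]])

print(normalise_keypoints_nccs(project(points_cam, K_a), K_a))
print(normalise_keypoints_nccs(project(points_cam, K_b), K_b))
```

In the paper's pipeline, keypoints normalised this way condition a diffusion model that generates the corresponding 3D keypoints, from which pose and joint angles are recovered; the sketch above only covers the normalisation step.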
