SD-Net: Symmetric-Aware Keypoint Prediction and Domain Adaptation for 6D Pose Estimation In Bin-picking Scenarios (2403.09317v1)

Published 14 Mar 2024 in cs.CV and cs.AI

Abstract: Despite recent success in 6D pose estimation for bin-picking scenarios, existing methods still struggle to produce accurate predictions for symmetric objects and real-world scenes. The primary bottlenecks are 1) keypoint ambiguity caused by object symmetries and 2) the domain gap between real and synthetic data. To address these problems, we propose SD-Net, a new 6D pose estimation network with symmetric-aware keypoint prediction and self-training domain adaptation. SD-Net builds on pointwise keypoint regression and deep Hough voting to detect keypoints reliably under clutter and occlusion. Specifically, at the keypoint prediction stage, we design a robust 3D keypoint selection strategy that accounts for each object's symmetry class and its equivalent keypoints, which facilitates locating 3D keypoints even in highly occluded scenes. Additionally, we build an effective filtering algorithm on the predicted keypoints to dynamically eliminate ambiguous and outlier candidates. At the domain adaptation stage, we propose a self-training framework using a student-teacher training scheme. To distinguish reliable predictions, we harness a tailored heuristic for 3D-geometry pseudo-labelling based on a semi-chamfer distance. On the public Siléane dataset, SD-Net achieves state-of-the-art results with an average precision of 96%. On the public Parametric dataset, which tests learning and generalization ability, SD-Net outperforms the state-of-the-art method by 8%. The code is available at https://github.com/dingthuang/SD-Net.
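The core idea behind the symmetric-aware keypoint stage is that, for a symmetric object, each annotated keypoint has a set of geometrically equivalent positions, and predictions must be collapsed to one unambiguous target before voting. The snippet below is a minimal hypothetical sketch, not the authors' implementation: it assumes a discrete cyclic symmetry about the object's z-axis, and the anchor-based canonicalization and function names (`equivalent_keypoints`, `canonicalize`) are invented for illustration.

```python
import numpy as np

def rotz(theta):
    """Rotation matrix about the object's symmetry axis (assumed z)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def equivalent_keypoints(kp, order):
    """All positions equivalent to keypoint `kp` under a cyclic
    symmetry of the given order (e.g. order=4 for a square head)."""
    return [rotz(2.0 * np.pi * k / order) @ kp for k in range(order)]

def canonicalize(kp, order, anchor=np.array([1.0, 0.0, 0.0])):
    """Collapse a predicted keypoint onto one canonical representative:
    among its symmetry-equivalent positions, keep the one closest to a
    fixed anchor direction, so every equivalent prediction regresses
    to the same, unambiguous target."""
    cands = np.stack(equivalent_keypoints(kp, order))
    return cands[np.argmin(np.linalg.norm(cands - anchor, axis=1))]
```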

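The abstract also names a "semi-chamfer distance" for selecting reliable pseudo-labels in the student-teacher stage, but does not define it. A plausible reading is a one-directional chamfer distance from the observed (partial) scan to the CAD model transformed by the predicted pose; the sketch below works under that assumption, and the threshold `tau` and function names are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def semi_chamfer_distance(scene_pts, model_pts, R, t):
    """One-directional (scene -> model) chamfer distance.

    Transform the CAD model points by the predicted pose (R, t) and
    measure, for every observed scene point, the distance to its
    nearest model point. Only the scene -> model direction is used
    because real depth scans are partial, so the model -> scene term
    would penalise a correct pose on occluded surface.
    """
    posed_model = model_pts @ R.T + t          # apply predicted 6D pose
    tree = cKDTree(posed_model)                # nearest-neighbour index
    dists, _ = tree.query(scene_pts, k=1)      # scene -> model distances
    return dists.mean()

def is_reliable(scene_pts, model_pts, R, t, tau=0.005):
    """Keep a teacher prediction as a pseudo-label for the student
    only if its geometric consistency score falls below a threshold
    (tau is an assumed hyperparameter)."""
    return semi_chamfer_distance(scene_pts, model_pts, R, t) < tau
```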
