
Robot Agnostic Visual Servoing considering kinematic constraints enabled by a decoupled network trajectory planner structure (2405.07017v3)

Published 11 May 2024 in cs.RO

Abstract: We propose a visual servoing method consisting of a detection network and a velocity trajectory planner. First, the detection network estimates the object's position and orientation in image space; these estimates are then normalized and filtered. The resulting direction and orientation are the input to the trajectory planner, which takes into account the kinematic constraints of the robotic system in use. This allows safe and stable control, since the kinematic boundary values are respected during planning. Moreover, because direction estimation and velocity planning are separated, the learned part of the method does not directly influence the control value. This also enables transfer of the method to different robotic systems without retraining, making it robot agnostic. We evaluate our method on different visual servoing tasks, with and without clutter, on two different robotic systems. Our method achieved mean absolute position errors of <0.5 mm and orientation errors of <1°. Additionally, we transferred the method to a new system that differs in both robot and camera, underlining the robot-agnostic capability of our method.
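To make the decoupling concrete, below is a minimal sketch in Python of how such a two-stage structure could look: a learned estimator that only produces a normalized, filtered direction in image space, and a separate, robot-specific planner that converts it into a velocity command within the robot's kinematic limits. All names, gains, and limits are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def normalize_and_filter(raw_estimate, prev_filtered, alpha=0.2):
    """Normalize the detection network's output and smooth it over time.

    Exponential smoothing is used here as a simple stand-in for the
    filtering step described in the abstract (assumed, not from the paper).
    """
    direction = raw_estimate / (np.linalg.norm(raw_estimate) + 1e-9)
    if prev_filtered is None:
        return direction
    return alpha * direction + (1.0 - alpha) * prev_filtered

class VelocityPlanner:
    """Robot-specific planner: clamps speed and acceleration so the commanded
    velocity stays within the kinematic limits of the target robot."""

    def __init__(self, v_max, a_max, dt):
        self.v_max, self.a_max, self.dt = v_max, a_max, dt
        self.v_prev = np.zeros(3)

    def plan(self, direction, gain=0.05):
        v_des = gain * direction
        speed = np.linalg.norm(v_des)
        if speed > self.v_max:                 # velocity limit
            v_des *= self.v_max / speed
        dv = v_des - self.v_prev
        dv_max = self.a_max * self.dt          # acceleration limit per control cycle
        if np.linalg.norm(dv) > dv_max:
            dv *= dv_max / np.linalg.norm(dv)
        self.v_prev = self.v_prev + dv
        return self.v_prev

# Swapping robots only means constructing the planner with that robot's limits;
# the learned estimator is untouched, which is the "robot agnostic" idea.
planner = VelocityPlanner(v_max=0.1, a_max=0.5, dt=0.01)
filtered = normalize_and_filter(np.array([0.03, -0.01, 0.002]), prev_filtered=None)
velocity_command = planner.plan(filtered)
```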

