
Towards vision-based dual arm robotic fruit harvesting (2306.08729v2)

Published 14 Jun 2023 in cs.RO

Abstract: Interest in agricultural robotics has increased considerably in recent years due to benefits such as improved productivity and reduced labor. However, the unstructured environments involved make the development of robotic harvesters challenging. Most research in agricultural robotics focuses on single-arm manipulation. Here, we propose a dual-arm approach. We present a dual-arm fruit harvesting robot equipped with an RGB-D camera and cutting and collecting tools. We exploit the cooperative task description to maximize the capabilities of the dual-arm robot. We designed a Hierarchical Quadratic Programming based control strategy to fulfill the set of hard constraints related to the robot and environment: robot joint limits, robot self-collisions, and robot-fruit and robot-tree collisions. We combine deep learning and standard image processing algorithms to detect and track fruits as well as the tree trunk in the scene. We validate our perception methods on real-world RGB-D images and our control method in simulated experiments.
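The control idea in the abstract rests on strict task priorities: lower-priority objectives are only pursued in the null space of higher-priority ones. The paper's actual formulation is a Hierarchical Quadratic Program that also enforces inequality constraints (joint limits, collision avoidance), which a plain null-space recursion cannot express. As a simplified, hedged sketch of just the priority mechanism for equality tasks (function name and task setup are illustrative, not from the paper):

```python
import numpy as np

def hierarchical_ik(tasks, n_joints):
    """Strict-priority least-squares over joint velocities.

    tasks: list of (J, xd) pairs, highest priority first, where J is an
    (m, n_joints) task Jacobian and xd the desired task-space velocity.
    Each lower-priority task is resolved in the null space of all
    higher-priority tasks, so it can never disturb them.
    """
    qd = np.zeros(n_joints)   # joint-velocity solution accumulated so far
    N = np.eye(n_joints)      # projector onto the remaining null space
    for J, xd in tasks:
        JN = J @ N                         # task Jacobian restricted to free directions
        JN_pinv = np.linalg.pinv(JN)
        qd = qd + JN_pinv @ (xd - J @ qd)  # correct residual of this task only
        N = N - JN_pinv @ JN               # shrink the null space for the next level
    return qd

# Example: 3-DoF arm, primary task drives joint 0, secondary drives joint 1.
primary = (np.array([[1.0, 0.0, 0.0]]), np.array([1.0]))
secondary = (np.array([[0.0, 1.0, 0.0]]), np.array([2.0]))
qd = hierarchical_ik([primary, secondary], 3)
# A secondary task that conflicts with the primary is simply ignored:
conflicting = (np.array([[1.0, 0.0, 0.0]]), np.array([5.0]))
qd_conflict = hierarchical_ik([primary, conflicting], 3)
```

A full HQP solver would replace each pseudoinverse step with a QP whose inequality constraints (joint limits, self-collision and robot-fruit/tree distance constraints) are propagated down the hierarchy; the recursion above only conveys the priority structure.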
