
Robotic Grasping of Harvested Tomato Trusses Using Vision and Online Learning (2309.17170v2)

Published 29 Sep 2023 in cs.RO, cs.AI, and cs.CV

Abstract: Currently, truss tomato weighing and packaging require significant manual work. The main obstacle to automation lies in the difficulty of developing a reliable robotic grasping system for already harvested trusses. We propose a method to grasp trusses that are stacked in a crate with considerable clutter, which is how they are commonly stored and transported after harvest. The method consists of a deep learning-based vision system to first identify the individual trusses in the crate and then determine a suitable grasping location on the stem. To this end, we have introduced a grasp pose ranking algorithm with online learning capabilities. After selecting the most promising grasp pose, the robot executes a pinch grasp without needing touch sensors or geometric models. Lab experiments with a robotic manipulator equipped with an eye-in-hand RGB-D camera showed a 100% clearance rate when tasked to pick all trusses from a pile. 93% of the trusses were successfully grasped on the first try, while the remaining 7% required more attempts.
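The abstract describes a grasp pose ranking algorithm that improves online from grasp outcomes, but gives no implementation details. As a rough illustration of the general idea only (the feature set, scoring function, and update rule below are hypothetical, not taken from the paper), one can score candidate poses with a linear model and apply a perceptron-style update after each attempted grasp:

```python
import numpy as np

class GraspPoseRanker:
    """Minimal sketch of a grasp pose ranker with online learning.

    Each candidate grasp pose is summarized by a feature vector
    (e.g. stem visibility, local clutter, height in the pile --
    hypothetical features, not from the paper). A linear score
    ranks candidates; weights are updated online from success or
    failure feedback after each executed grasp.
    """

    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = np.zeros(n_features)  # start with no preference
        self.lr = lr

    def rank(self, candidates: np.ndarray) -> np.ndarray:
        """Return candidate indices sorted from best to worst score."""
        scores = candidates @ self.w
        return np.argsort(-scores)

    def update(self, features: np.ndarray, success: bool) -> None:
        """Reinforce features of successful grasps, penalize failures."""
        label = 1.0 if success else -1.0
        self.w += self.lr * label * features


# Usage: rank three candidate poses, attempt the top one, learn from it.
ranker = GraspPoseRanker(n_features=3)
candidates = np.array([[0.9, 0.1, 0.5],
                       [0.2, 0.8, 0.3],
                       [0.5, 0.5, 0.5]])
order = ranker.rank(candidates)
best = candidates[order[0]]
ranker.update(best, success=True)  # feedback from the executed grasp
```

The paper's actual ranker may use a very different model; the sketch only shows how ranking and incremental feedback can fit together without touch sensors or geometric models.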

