
Semantics-Aware Next-best-view Planning for Efficient Search and Detection of Task-relevant Plant Parts (2306.09801v3)

Published 16 Jun 2023 in cs.RO and cs.CV

Abstract: Searching and detecting the task-relevant parts of plants is important to automate harvesting and de-leafing of tomato plants using robots. This is challenging due to high levels of occlusion in tomato plants. Active vision is a promising approach in which the robot strategically plans its camera viewpoints to overcome occlusion and improve perception accuracy. However, current active-vision algorithms cannot differentiate between relevant and irrelevant plant parts and spend time on perceiving irrelevant plant parts. This work proposed a semantics-aware active-vision strategy that uses semantic information to identify the relevant plant parts and prioritise them during view planning. The proposed strategy was evaluated on the task of searching and detecting the relevant plant parts using simulation and real-world experiments. In simulation experiments, the semantics-aware strategy proposed could search and detect 81.8% of the relevant plant parts using nine viewpoints. It was significantly faster and detected more plant parts than predefined, random, and volumetric active-vision strategies that do not use semantic information. The strategy proposed was also robust to uncertainty in plant and plant-part positions, plant complexity, and different viewpoint-sampling strategies. In real-world experiments, the semantics-aware strategy could search and detect 82.7% of the relevant plant parts using seven viewpoints, under complex greenhouse conditions with natural variation and occlusion, natural illumination, sensor noise, and uncertainty in camera poses. The results of this work clearly indicate the advantage of using semantics-aware active vision for targeted perception of plant parts and its applicability in the real world. It can significantly improve the efficiency of automated harvesting and de-leafing in tomato crop production.
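
To make the core idea concrete, below is a minimal Python sketch of semantics-weighted next-best-view selection, assuming a voxel map whose cells carry an occupancy probability and a semantic label. Every name, the relevance weights, and the distance-based visibility test are illustrative assumptions, not the authors' implementation; a real planner would ray-cast through an octree, sample viewpoints around the plant, and account for motion cost.

```python
from dataclasses import dataclass
import math

# Hypothetical relevance weights (not taken from the paper): task-relevant
# parts are prioritised; irrelevant parts contribute little to a view's score.
RELEVANCE = {"fruit": 1.0, "peduncle": 1.0, "leaf": 0.1, "stem": 0.1, "unknown": 0.3}

@dataclass
class Voxel:
    position: tuple   # (x, y, z) voxel centre in metres
    occupancy: float  # P(occupied); 0.5 means completely unobserved
    label: str        # semantic class, e.g. from a segmentation network

def entropy(p: float) -> float:
    """Shannon entropy (bits) of a binary occupancy estimate."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def view_utility(viewpoint, voxels, max_range=0.4):
    """Semantics-weighted expected information gain of one candidate view.

    Simplification: a voxel counts as visible if it lies within max_range
    of the camera position; a real planner would ray-cast for occlusion.
    """
    gain = 0.0
    for v in voxels:
        if math.dist(viewpoint, v.position) <= max_range:
            gain += RELEVANCE.get(v.label, RELEVANCE["unknown"]) * entropy(v.occupancy)
    return gain

def next_best_view(candidates, voxels):
    """Pick the viewpoint maximising semantics-weighted information gain."""
    return max(candidates, key=lambda vp: view_utility(vp, voxels))

# Toy usage: two candidate views around a partially observed plant.
voxels = [
    Voxel((0.0, 0.0, 1.0), 0.5, "fruit"),  # unknown and task-relevant
    Voxel((0.5, 0.0, 1.0), 0.5, "leaf"),   # unknown but irrelevant
]
print(next_best_view([(0.0, 0.3, 1.0), (0.5, 0.3, 1.0)], voxels))
# -> (0.0, 0.3, 1.0): the view covering the unknown, task-relevant fruit voxel
```

Weighting the volumetric entropy term by semantic relevance is what lets the planner skip views that would only resolve leaves and background, which is the behaviour the abstract credits for the speed-up over purely volumetric strategies.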

