
A Vision-Based Navigation System for Arable Fields

Published 21 Sep 2023 in cs.RO and cs.CV | arXiv:2309.11989v2

Abstract: Vision-based navigation systems for arable fields are an underexplored area of agricultural robot navigation. Vision systems deployed in arable fields face challenges such as fluctuating weed density, varying illumination levels, changing growth stages, and crop row irregularities. Current solutions are often crop-specific and aim to address isolated conditions such as illumination or weed density. Moreover, the scarcity of comprehensive datasets hinders the development of generalised machine learning systems for navigating these fields. This paper proposes a suite of deep learning-based perception algorithms using affordable vision sensors for vision-based navigation in arable fields. First, a comprehensive dataset was compiled that captures the intricacies of multiple crop seasons, various crop types, and a range of field variations. Next, this study delves into the creation of robust infield perception algorithms capable of accurately detecting crop rows under diverse conditions such as different growth stages, weed densities, and varying illumination. Further, it investigates the integration of crop row following with vision-based crop row switching for efficient field-scale navigation. The proposed infield navigation system was tested in commercial arable fields, traversing a total distance of 4.5 km with average heading and cross-track errors of 1.24° and 3.32 cm, respectively.
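The abstract reports performance as average heading and cross-track errors relative to the crop row being followed. The paper's implementation details are not given here, but these two metrics are commonly defined against the detected row centreline as below; the function name, argument layout, and sign convention are illustrative assumptions, not the authors' code.

```python
import math

def row_tracking_errors(robot_x, robot_y, robot_yaw, row_point, row_yaw):
    """Heading and cross-track error of a robot relative to a detected crop row.

    robot_x, robot_y : robot position in a planar field frame (metres)
    robot_yaw        : robot heading in radians
    row_point        : (x, y) of any point on the detected row centreline
    row_yaw          : orientation of the row centreline in radians
    Returns (heading_error_deg, cross_track_m).
    """
    # Heading error: angular difference, wrapped into [-pi, pi]
    dyaw = (robot_yaw - row_yaw + math.pi) % (2 * math.pi) - math.pi

    # Cross-track error: signed perpendicular distance from the robot
    # to the row line (positive to the row's left, by this convention)
    dx = robot_x - row_point[0]
    dy = robot_y - row_point[1]
    cross_track = -dx * math.sin(row_yaw) + dy * math.cos(row_yaw)

    return math.degrees(dyaw), cross_track
```

For example, a robot 5 cm to the left of a row running along the x-axis, with a 0.02 rad heading offset, yields a cross-track error of 0.05 m and a heading error of about 1.15°, comparable in scale to the averages reported in the abstract.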

