Direct learning of home vector direction for insect-inspired robot navigation (2405.03827v1)

Published 6 May 2024 in cs.RO and cs.CV

Abstract: Insects have long been recognized for their ability to navigate and return home using visual cues from their nest's environment. However, the precise mechanism underlying this remarkable homing skill remains a subject of ongoing investigation. Drawing inspiration from the learning flights of honey bees and wasps, we propose a robot navigation method that directly learns the home vector direction from visual percepts during a learning flight in the vicinity of the nest. After learning, the robot will travel away from the nest, come back by means of odometry, and eliminate the resultant drift by inferring the home vector orientation from the currently experienced view. Using a compact convolutional neural network, we demonstrate successful learning in both simulated and real forest environments, as well as successful homing control of a simulated quadrotor. The average errors of the inferred home vectors in general stay well below the 90° required for successful homing, and below 24° if all images contain sufficient texture and illumination. Moreover, we show that the trajectory followed during the initial learning flight has a pronounced impact on the network's performance. A higher density of sample points in proximity to the nest results in a more consistent return. Code and data are available at https://mavlab.tudelft.nl/learning_to_home.
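The 90° threshold mentioned in the abstract follows from simple geometry: as long as the inferred home-vector direction deviates from the true direction by less than 90°, each step still has a positive component toward the nest. A minimal sketch of this criterion (illustrative helper functions, not the authors' released code) could look like:

```python
import math

def home_vector(robot_xy, nest_xy):
    """True home-vector direction: bearing from the robot to the nest, in radians."""
    return math.atan2(nest_xy[1] - robot_xy[1], nest_xy[0] - robot_xy[0])

def angular_error(pred_rad, true_rad):
    """Smallest absolute difference between two angles, in degrees."""
    d = (pred_rad - true_rad + math.pi) % (2.0 * math.pi) - math.pi
    return abs(math.degrees(d))

def homing_progresses(error_deg):
    """Below 90 deg, motion along the inferred direction still closes
    the distance to the nest; at or above 90 deg it does not."""
    return error_deg < 90.0

# Example: robot 10 m east of the nest; network prediction is off by 0.3 rad.
true_dir = home_vector((10.0, 0.0), (0.0, 0.0))
err = angular_error(true_dir + 0.3, true_dir)
print(round(err, 1), homing_progresses(err))
```

The network itself would regress this direction from a single omnidirectional view; the sketch above only illustrates the error metric and success criterion reported in the abstract.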
