Empowering Robot Path Planning with Large Language Models: osmAG Map Topology & Hierarchy Comprehension with LLMs (2403.08228v3)
Abstract: LLMs have demonstrated great potential in robotic applications by providing essential general knowledge. Mobile robots rely on map comprehension for tasks such as localization and navigation. In this paper, we explore enabling LLMs to comprehend the topology and hierarchy of the Area Graph, a text-based hierarchical, topometric semantic map representation that uses polygons to demarcate areas such as rooms or buildings. Our experiments demonstrate that, given the right map representation, LLMs can effectively comprehend the Area Graph's topology and hierarchy. After straightforward fine-tuning, the LLaMA2 models surpassed ChatGPT-3.5 in mastering these aspects. Our dataset, dataset-generation code, and fine-tuned LoRA adapters can be accessed at https://github.com/xiefujing/LLM-osmAG-Comprehension.
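Since osmAG stores the Area Graph as text in the OpenStreetMap XML format, a minimal sketch can illustrate how the hierarchy described above might be recovered from such a file. This is an illustration only: the tag keys used below (`osmAG:areaType`, `osmAG:parent`) are assumptions for the sake of the example, not the paper's confirmed schema.

```python
# Minimal sketch: recovering the parent/child hierarchy from an
# osmAG-style OSM XML map. Tag keys "osmAG:areaType" and "osmAG:parent"
# are illustrative assumptions, not necessarily the exact osmAG schema.
import xml.etree.ElementTree as ET
from collections import defaultdict

def load_hierarchy(osm_xml: str) -> dict[str, list[str]]:
    """Map each parent area id to the ids of its child areas."""
    root = ET.fromstring(osm_xml)
    children = defaultdict(list)
    for way in root.iter("way"):  # each area is an OSM "way" polygon
        tags = {t.get("k"): t.get("v") for t in way.iter("tag")}
        parent = tags.get("osmAG:parent")  # assumed hierarchy tag
        if parent is not None:
            children[parent].append(way.get("id"))
    return dict(children)

EXAMPLE = """<osm version="0.6">
  <way id="100"><tag k="osmAG:areaType" v="floor"/></way>
  <way id="101">
    <tag k="osmAG:areaType" v="room"/>
    <tag k="osmAG:parent" v="100"/>
  </way>
</osm>"""

print(load_hierarchy(EXAMPLE))  # {'100': ['101']}
```

Because the representation is plain text, a serialization like the `EXAMPLE` string above is also what can be placed directly into an LLM prompt or a fine-tuning sample.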