
ZeroCAP: Zero-Shot Multi-Robot Context Aware Pattern Formation via Large Language Models (2404.02318v3)

Published 2 Apr 2024 in cs.RO

Abstract: Incorporating language comprehension into robotic operations unlocks significant advancements in robotics, but also presents distinct challenges, particularly in executing spatially oriented tasks like pattern formation. This paper introduces ZeroCAP, a novel system that integrates LLMs with multi-robot systems for zero-shot, context-aware pattern formation. Grounded in the principles of language-conditioned robotics, ZeroCAP leverages the interpretative power of LLMs to translate natural language instructions into actionable robotic configurations. The approach combines vision-language models, cutting-edge segmentation techniques, and shape descriptors, enabling complex, context-driven pattern formations in multi-robot coordination. Through extensive experiments, we demonstrate the system's proficiency in executing complex, context-aware pattern formations across a spectrum of tasks, from surrounding and caging objects to infilling regions. This not only validates the system's capability to interpret and implement intricate context-driven tasks but also underscores its adaptability and effectiveness across varied environments and scenarios. The experimental videos and additional information about this work can be found at https://sites.google.com/view/zerocap/home.
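The abstract describes a pipeline that couples open-vocabulary perception (segmentation plus shape descriptors) with an LLM that maps a natural-language instruction to a multi-robot formation. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: `segment_object` and `query_llm` are stubbed placeholders standing in for the segmentation model and the LLM, and the geometric goal computation is one assumed way that "surround" and "infill" patterns might be realized.

```python
"""Illustrative sketch (not the authors' code) of a ZeroCAP-style pipeline:
a segmenter yields the target object's contour, a shape descriptor summarizes
it, an LLM maps the instruction to a pattern type, and per-robot goal
positions are computed geometrically. All model calls are stubbed."""

import numpy as np


def segment_object(image, phrase):
    """Placeholder for an open-vocabulary segmenter (e.g. a detector + SAM).
    Returns a hard-coded unit-square contour purely for illustration."""
    return np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])


def shape_descriptor(contour):
    """Simple descriptor: centroid, perimeter, and vertex count."""
    centroid = contour.mean(axis=0)
    closed = np.vstack([contour, contour[:1]])
    perimeter = np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()
    return {"centroid": centroid.tolist(),
            "perimeter": float(perimeter),
            "vertices": len(contour)}


def query_llm(instruction, descriptor):
    """Placeholder for the LLM call that interprets the instruction.
    A real system would prompt a model with the descriptor and instruction;
    here we key off a single word."""
    return "surround" if "surround" in instruction.lower() else "infill"


def surround_goals(contour, n_robots, offset=0.15):
    """Space robots evenly along the contour, pushed outward from the centroid."""
    centroid = contour.mean(axis=0)
    closed = np.vstack([contour, contour[:1]])
    seg = np.diff(closed, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])
    targets = np.linspace(0.0, cum[-1], n_robots, endpoint=False)
    goals = []
    for t in targets:
        i = np.searchsorted(cum, t, side="right") - 1
        frac = (t - cum[i]) / seg_len[i]
        p = closed[i] + frac * seg[i]
        direction = p - centroid
        direction /= np.linalg.norm(direction) + 1e-9
        goals.append(p + offset * direction)  # stand off from the boundary
    return np.array(goals)


def infill_goals(contour, n_robots):
    """Naive infill: sample a grid over the contour's bounding box."""
    lo, hi = contour.min(axis=0), contour.max(axis=0)
    side = int(np.ceil(np.sqrt(n_robots)))
    xs = np.linspace(lo[0], hi[0], side)
    ys = np.linspace(lo[1], hi[1], side)
    grid = np.array([[x, y] for y in ys for x in xs])
    return grid[:n_robots]


def plan_formation(image, instruction, n_robots):
    contour = segment_object(image, instruction)
    pattern = query_llm(instruction, shape_descriptor(contour))
    if pattern == "surround":
        return surround_goals(contour, n_robots)
    return infill_goals(contour, n_robots)


if __name__ == "__main__":
    goals = plan_formation(image=None, instruction="Surround the spill", n_robots=6)
    print(np.round(goals, 2))
```

In a full system, the computed goals would then be handed to a multi-robot motion planner or low-level controllers; that hand-off is outside the scope of this sketch.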

