
Sketch-Plan-Generalize: Learning and Planning with Neuro-Symbolic Programmatic Representations for Inductive Spatial Concepts (2404.07774v3)

Published 11 Apr 2024 in cs.LG and cs.RO

Abstract: Effective human-robot collaboration requires the ability to learn personalized concepts from a limited number of demonstrations, while exhibiting inductive generalization, hierarchical composition, and adaptability to novel constraints. Existing approaches that use the code-generation capabilities of pre-trained large (vision) language models, as well as purely neural models, show poor generalization to a priori unseen complex concepts. Neuro-symbolic methods (Grand et al., 2023) offer a promising alternative by searching in program space, but they struggle in large program spaces because demonstrations cannot effectively guide the search. Our key insight is to factor inductive concept learning as: (i) Sketch: detecting and inferring a coarse signature of a new concept; (ii) Plan: performing a Monte Carlo Tree Search (MCTS) over grounded action sequences, guided by human demonstrations; and (iii) Generalize: abstracting grounded plans into inductive programs. Our pipeline facilitates generalization and modular re-use, enabling continual concept learning. Our approach combines the code-generation ability of LLMs with grounded neural representations, resulting in neuro-symbolic programs that show stronger inductive generalization on the task of constructing complex structures vis-à-vis LLM-only and purely neural approaches. Further, we demonstrate reasoning and planning capabilities with learned concepts for embodied instruction following.
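
To make the three-stage factoring concrete, the following is a minimal Python sketch on a toy one-dimensional block-placement domain. All names (ConceptSketch, sketch, mcts_plan, generalize) are illustrative assumptions, not the authors' API, and flat Monte Carlo rollouts stand in for the paper's full MCTS; it shows only the shape of the Sketch-Plan-Generalize loop.

```python
# Illustrative sketch of the Sketch-Plan-Generalize factoring described in
# the abstract. All identifiers are hypothetical, not the authors' code.
from dataclasses import dataclass
import random

@dataclass
class ConceptSketch:
    name: str   # coarse concept label, e.g. "row"
    arity: int  # number of inductive parameters, e.g. length n

def sketch(demo):
    """(i) Sketch: infer a coarse signature of the new concept from a demo.
    Here it is hard-coded; in the paper this step is learned/inferred."""
    return ConceptSketch(name="row", arity=1)

def mcts_plan(demo, actions, horizon, n_sims=2000):
    """(ii) Plan: search over grounded action sequences, scored by how well
    a rollout reproduces the human demonstration (exact element match).
    Flat Monte Carlo sampling stands in for a full tree search."""
    best, best_score = [], -1.0
    for _ in range(n_sims):
        rollout = [random.choice(actions) for _ in range(horizon)]
        score = sum(a == d for a, d in zip(rollout, demo)) / len(demo)
        if score > best_score:
            best, best_score = rollout, score
    return best

def generalize(plan):
    """(iii) Generalize: abstract a grounded plan into an inductive program.
    Here we only detect a constant stride in placement positions."""
    xs = [x for (_, x) in plan]
    strides = {b - a for a, b in zip(xs, xs[1:])}
    if len(strides) == 1:
        step = strides.pop()
        return lambda n: [("place", xs[0] + i * step) for i in range(n)]
    return lambda n: plan[:n]  # fallback: replay the grounded plan

if __name__ == "__main__":
    demo = [("place", 0), ("place", 1), ("place", 2)]  # a 3-block row
    actions = [("place", x) for x in range(4)]
    sig = sketch(demo)
    plan = mcts_plan(demo, actions, horizon=len(demo))
    program = generalize(plan)
    print(sig, program(5))  # inductively extends the concept to n = 5
```

The point of the toy example is the division of labor: the demonstration guides the grounded search, and only the resulting plan, not the raw demonstration, is abstracted into a reusable, parameterized program.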

References (26)
  1. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
  2. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022.
  3. Grounding spatial relations for outdoor robot navigation. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 1976–1982. IEEE, 2015.
  4. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
  5. RAMP: A benchmark for evaluating robotic assembly manipulation and planning. IEEE Robotics and Automation Letters, 9(1):9–16, January 2024. doi: 10.1109/LRA.2023.3330611.
  6. PaLM-E: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023.
  7. Learning to infer graphics programs from hand-drawn images. Advances in Neural Information Processing Systems, 31, 2018.
  8. A natural language planner interface for mobile manipulators. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 6652–6659. IEEE, 2014.
  9. Inner Monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608, 2022.
  10. Learning neuro-symbolic programs for language guided robot manipulation. 2023.
  11. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
  12. Code as Policies: Language model programs for embodied control, 2023.
  13. StructDiffusion: Language-guided creation of physically-valid structures using unseen objects, 2023.
  14. The Neuro-Symbolic Concept Learner: Interpreting scenes, words, and sentences from natural supervision, 2019.
  15. Large language models as general pattern machines, 2023.
  16. Efficient grounding of abstract spatial concepts for natural language interaction with robot platforms. The International Journal of Robotics Research, 37(10):1269–1299, 2018.
  17. VirtualHome: Simulating household activities via programs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8494–8502, 2018.
  18. Bayesian inference of temporal task specifications from demonstrations. Advances in Neural Information Processing Systems, 31, 2018.
  19. Few-shot Bayesian imitation learning with logical program policies, 2019.
  20. ProgPrompt: Generating situated robot task plans using large language models. arXiv preprint arXiv:2209.11302, 2022.
  21. Approaching the symbol grounding problem with probabilistic graphical models. AI Magazine, 32(4):64–76, 2011.
  22. How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022):1279–1285, 2011. doi: 10.1126/science.1192788.
  23. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023.
  24. Demo2Code: From summarizing demonstrations to synthesizing code via extended chain-of-thought. arXiv preprint arXiv:2305.16744, 2023.
  25. Programmatically grounded, compositionally generalizable robotic manipulation. arXiv preprint arXiv:2304.13826, 2023.
  26. JARVIS-1: Open-world multi-task agents with memory-augmented multimodal language models, 2023.
