
Translating Natural Language to Planning Goals with Large-Language Models (2302.05128v1)

Published 10 Feb 2023 in cs.CL, cs.AI, and cs.RO

Abstract: Recent LLMs have demonstrated remarkable performance on a variety of NLP tasks, leading to intense excitement about their applicability across various domains. Unfortunately, recent work has also shown that LLMs are unable to perform accurate reasoning or solve planning problems, which may limit their usefulness for robotics-related tasks. In this work, our central question is whether LLMs are able to translate goals specified in natural language into a structured planning language. If so, an LLM can act as a natural interface between the planner and human users; the translated goal can be handed to domain-independent AI planners that are very effective at planning. Our empirical results on GPT-3.5 variants show that LLMs are much better suited to translation than to planning. We find that LLMs are able to leverage commonsense knowledge and reasoning to furnish missing details in under-specified goals (as is often the case in natural language). However, our experiments also reveal that LLMs can fail to generate goals in tasks that involve numerical or physical (e.g., spatial) reasoning, and that LLMs are sensitive to the prompts used. As such, these models are promising for translation to structured planning languages, but care should be taken in their use.
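The pipeline the abstract describes, with the LLM acting only as a translator from natural language to a structured goal that a classical planner then solves, can be sketched as a few-shot prompting setup. The domain predicates, example pairs, and prompt wording below are illustrative assumptions in a blocks-world style, not the authors' actual prompts or evaluation setup:

```python
# Minimal sketch of the translation-not-planning setup: build a few-shot
# prompt that asks a model for a PDDL goal expression only, leaving the
# actual planning to a downstream domain-independent planner.
# The example pairs and wording are hypothetical, not from the paper.

FEW_SHOT_EXAMPLES = [
    ("Stack block a on top of block b.", "(:goal (on a b))"),
    ("Make sure block c is sitting on the table.", "(:goal (ontable c))"),
]

def build_translation_prompt(nl_goal: str) -> str:
    """Assemble a few-shot prompt whose completion should be a PDDL goal."""
    lines = ["Translate each natural-language goal into a PDDL goal expression."]
    for nl, pddl in FEW_SHOT_EXAMPLES:
        lines.append(f"NL: {nl}\nPDDL: {pddl}")
    # Leave the final PDDL slot empty for the model to fill in.
    lines.append(f"NL: {nl_goal}\nPDDL:")
    return "\n\n".join(lines)

prompt = build_translation_prompt("Put block d on block a.")
print(prompt)
```

In this framing the LLM never produces a plan; its completion (e.g. `(:goal (on d a))`) would be appended to a PDDL problem file and passed to an off-the-shelf planner, which is where the paper argues the actual planning should happen.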

Authors (6)
  1. Yaqi Xie (23 papers)
  2. Chen Yu (33 papers)
  3. Tongyao Zhu (8 papers)
  4. Jinbin Bai (19 papers)
  5. Ze Gong (6 papers)
  6. Harold Soh (54 papers)
Citations (119)