DKPROMPT: Domain Knowledge Prompting Vision-Language Models for Open-World Planning (2406.17659v1)

Published 25 Jun 2024 in cs.AI and cs.RO

Abstract: Vision-Language Models (VLMs) have been applied to robot task planning problems, where the robot receives a task in natural language and generates plans based on visual inputs. While current VLMs have demonstrated strong vision-language understanding capabilities, their performance is still far from satisfactory in planning tasks. At the same time, although classical task planners, such as PDDL-based planners, are strong at planning for long-horizon tasks, they do not work well in open worlds where unforeseen situations are common. In this paper, we propose a novel task planning and execution framework, called DKPROMPT, which automates VLM prompting using domain knowledge in PDDL for classical planning in open worlds. Results from quantitative experiments show that DKPROMPT outperforms classical planning, pure VLM-based planning, and several other competitive baselines in task completion rate.
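The closed-loop idea in the abstract, using VLM answers about the scene to verify a classical plan's PDDL preconditions and effects and to trigger replanning when they fail, can be sketched as follows. This is an illustrative outline only, not the paper's implementation: the `Action` fields, the `ask_vlm` yes/no interface, and the `replan` callback are all hypothetical names introduced here.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A grounded PDDL action with its precondition and effect atoms
    rendered as natural-language yes/no questions for a VLM."""
    name: str
    preconditions: list  # e.g. ["Is the cup reachable?"]
    effects: list        # e.g. ["Is the cup in the gripper?"]

def execute_plan(plan, ask_vlm, replan):
    """Execute a classical plan under open-world monitoring.

    Before each action, ask the VLM whether every precondition holds in
    the current observation; after each action, ask whether every effect
    was achieved. Any "no" answer indicates an unforeseen situation, so
    control is handed back to the planner via `replan`.
    """
    for i, action in enumerate(plan):
        if not all(ask_vlm(q) for q in action.preconditions):
            return replan(plan[i:])   # unmet precondition: replan
        # ... the robot would execute `action` here ...
        if not all(ask_vlm(q) for q in action.effects):
            return replan(plan[i:])   # action failed silently: replan
    return "success"
```

With a stubbed VLM that always answers "yes" the plan completes; a single "no" diverts execution to the replanning callback, which is where a PDDL planner would be re-invoked with the updated state.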

Authors (9)
  1. Xiaohan Zhang
  2. Zainab Altaweel
  3. Yohei Hayamizu
  4. Yan Ding
  5. Saeid Amiri
  6. Hao Yang
  7. Andy Kaminski
  8. Chad Esselink
  9. Shiqi Zhang
Citations (3)