VSP: Assessing the dual challenges of perception and reasoning in spatial planning tasks for VLMs (2407.01863v1)

Published 2 Jul 2024 in cs.CL

Abstract: Vision language models (VLMs) are an exciting emerging class of language models (LMs) that have merged classic LM capabilities with those of image processing systems. However, the ways that these capabilities combine are not always intuitive and warrant direct investigation. One understudied capability in VLMs is visual spatial planning -- the ability to comprehend the spatial arrangements of objects and devise action plans to achieve desired outcomes in visual scenes. In our study, we introduce VSP, a benchmark that 1) evaluates the spatial planning capability of these models in general, and 2) breaks down the visual planning task into finer-grained sub-tasks, including perception and reasoning, and measures the models' capabilities in these sub-tasks. Our evaluation shows that both open-source and private VLMs fail to generate effective plans for even simple spatial planning tasks. Evaluations on the fine-grained analytical tasks further reveal fundamental deficiencies in the models' visual perception and bottlenecks in their reasoning abilities, explaining their poor performance on the general spatial planning tasks. Our work illuminates future directions for improving VLMs' abilities in spatial planning. Our benchmark is publicly available at https://github.com/UCSB-NLP-Chang/Visual-Spatial-Planning.
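To make the evaluated capability concrete, the sketch below illustrates the kind of plan-validation logic a spatial planning task like this implies: the model must emit a sequence of moves from a start cell to a goal cell that avoids obstacles, and a checker scores the plan against the scene. This is a minimal illustration, not the benchmark's actual harness or data format; the grid encoding, action names, and function are assumptions for exposition (see the linked repository for the real evaluation code).

```python
# Hypothetical sketch of validating a VLM's maze-navigation plan.
# Grid encoding is an assumption: 'S' start, 'G' goal, '#' obstacle, '.' free.

GRID = [
    "S.#.",
    ".#..",
    "...#",
    "#..G",
]

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def plan_reaches_goal(grid, plan):
    """Return True iff the action sequence walks from S to G
    without leaving the grid or stepping on an obstacle."""
    rows, cols = len(grid), len(grid[0])
    # Locate the start cell.
    r, c = next((i, row.index("S")) for i, row in enumerate(grid) if "S" in row)
    for action in plan:
        dr, dc = MOVES[action]
        r, c = r + dr, c + dc
        if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] == "#":
            return False  # left the grid or hit an obstacle
    return grid[r][c] == "G"

# In practice the plan would be parsed from the model's free-form output;
# here we hard-code one for illustration.
print(plan_reaches_goal(GRID, ["down", "down", "right", "down", "right", "right"]))  # True
```

Under this framing, the benchmark's finer-grained sub-tasks isolate the two failure points such a checker conflates: perception (can the model read the grid from the image at all?) and reasoning (given a correct reading, can it search for a valid path?).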

Authors (7)
  1. Qiucheng Wu (7 papers)
  2. Handong Zhao (38 papers)
  3. Michael Saxon (27 papers)
  4. Trung Bui (79 papers)
  5. William Yang Wang (254 papers)
  6. Yang Zhang (1129 papers)
  7. Shiyu Chang (120 papers)
Citations (3)