VSP: Assessing the dual challenges of perception and reasoning in spatial planning tasks for VLMs (2407.01863v1)
Abstract: Vision language models (VLMs) are an exciting emerging class of language models (LMs) that merge classic LM capabilities with those of image processing systems. However, the ways these capabilities combine are not always intuitive and warrant direct investigation. One understudied capability in VLMs is visual spatial planning -- the ability to comprehend the spatial arrangements of objects and devise action plans to achieve desired outcomes in visual scenes. In our study, we introduce VSP, a benchmark that 1) evaluates the spatial planning capability of these models in general, and 2) breaks down the visual planning task into finer-grained sub-tasks, including perception and reasoning, and measures the models' capabilities on these sub-tasks. Our evaluation shows that both open-source and private VLMs fail to generate effective plans for even simple spatial planning tasks. Evaluations on the fine-grained analytical tasks further reveal fundamental deficiencies in the models' visual perception and bottlenecks in their reasoning abilities, explaining their poor performance on the general spatial planning tasks. Our work illuminates future directions for improving VLMs' abilities in spatial planning. Our benchmark is publicly available at https://github.com/UCSB-NLP-Chang/Visual-Spatial-Planning.
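The abstract describes judging whether a model's generated action plan actually reaches the desired outcome in a visual scene. As a rough illustration only (this is not the benchmark's actual harness; the grid layout, move vocabulary, and success criterion below are assumptions), the sketch simulates a predicted plan on a toy grid-navigation task to decide whether it succeeds.

```python
# Illustrative sketch: score a VLM's predicted action plan on a hypothetical
# grid-navigation task. Not the official VSP evaluation code.

from typing import List, Tuple

# Hypothetical 4x4 grid: S = start, G = goal, # = obstacle, . = free cell.
GRID = [
    "S..#",
    ".#..",
    "...#",
    "#..G",
]

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}


def find(symbol: str) -> Tuple[int, int]:
    """Locate a symbol's (row, col) position in the grid."""
    for r, row in enumerate(GRID):
        for c, cell in enumerate(row):
            if cell == symbol:
                return r, c
    raise ValueError(f"symbol {symbol!r} not found")


def plan_succeeds(plan: List[str]) -> bool:
    """Simulate the plan step by step; fail on unknown actions, walls, or obstacles."""
    r, c = find("S")
    goal = find("G")
    for action in plan:
        if action not in MOVES:
            return False  # unparseable action
        dr, dc = MOVES[action]
        r, c = r + dr, c + dc
        if not (0 <= r < len(GRID) and 0 <= c < len(GRID[0])):
            return False  # walked off the grid
        if GRID[r][c] == "#":
            return False  # hit an obstacle
    return (r, c) == goal  # success only if the plan ends exactly at the goal


# A plan as it might be parsed from a VLM's text response.
predicted_plan = ["down", "down", "right", "right", "down", "right"]
print("plan succeeds:", plan_succeeds(predicted_plan))
```

A plan-level check like this only measures end-to-end success; the benchmark's fine-grained sub-tasks additionally probe whether failures stem from misperceiving the scene or from faulty reasoning over a correctly perceived one.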
- Qiucheng Wu
- Handong Zhao
- Michael Saxon
- Trung Bui
- William Yang Wang
- Yang Zhang
- Shiyu Chang