SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities (2401.12168v1)

Published 22 Jan 2024 in cs.CV, cs.CL, cs.LG, and cs.RO

Abstract: Understanding and reasoning about spatial relationships is a fundamental capability for Visual Question Answering (VQA) and robotics. While Vision Language Models (VLM) have demonstrated remarkable performance in certain VQA benchmarks, they still lack capabilities in 3D spatial reasoning, such as recognizing quantitative relationships of physical objects like distances or size differences. We hypothesize that VLMs' limited spatial reasoning capability is due to the lack of 3D spatial knowledge in training data and aim to solve this problem by training VLMs with Internet-scale spatial reasoning data. To this end, we present a system to facilitate this approach. We first develop an automatic 3D spatial VQA data generation framework that scales up to 2 billion VQA examples on 10 million real-world images. We then investigate various factors in the training recipe, including data quality, training pipeline, and VLM architecture. Our work features the first internet-scale 3D spatial reasoning dataset in metric space. By training a VLM on such data, we significantly enhance its ability on both qualitative and quantitative spatial VQA. Finally, we demonstrate that this VLM unlocks novel downstream applications in chain-of-thought spatial reasoning and robotics due to its quantitative estimation capability. Project website: https://spatial-vlm.github.io/

Introduction

Vision Language Models (VLMs) have advanced significantly across tasks such as image captioning and visual question answering (VQA). However, state-of-the-art VLMs, including GPT-4V, still exhibit deficiencies in spatial reasoning: understanding where objects sit in 3D space and how they relate to one another. Proficiency in spatial reasoning would extend VLMs' utility to domains such as robotics and augmented reality (AR). This paper posits that the spatial reasoning limitations of current VLMs stem not from architectural constraints but from the lack of 3D spatial knowledge in their training data.

Methodology

To address this gap in 3D spatial reasoning, the researchers present SpatialVLM, a system that synthesizes a large-scale spatial VQA dataset from internet-scale 2D imagery and uses it to train VLMs. The resulting models can perform both qualitative and quantitative spatial reasoning directly from 2D images. The data synthesis pipeline chains off-the-shelf computer vision models for object detection, metric depth estimation, segmentation, and captioning to lift images into object-centric 3D context, from which templated spatial question-answer pairs are generated. In total, the pipeline produces roughly 2 billion spatial reasoning VQA pairs from 10 million real-world images.
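
The core lifting step behind this synthesis can be sketched in a few lines. The example below is a minimal, illustrative Python sketch assuming a pinhole camera model, a monocular metric depth map, and hypothetical object detections; the object names, intrinsics, and question template are placeholders, not the paper's exact pipeline.

```python
# Sketch of the spatial-VQA synthesis idea: lift detected object centers to metric 3D
# using a depth map and camera intrinsics, then fill a distance question/answer template.
# All concrete values here are illustrative assumptions.
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Convert a pixel (u, v) with metric depth into a 3D camera-frame point."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def make_distance_qa(name_a, center_a, name_b, center_b, depth_map, intrinsics):
    """Generate one quantitative spatial QA pair from two detected objects."""
    fx, fy, cx, cy = intrinsics
    ua, va = center_a
    ub, vb = center_b
    pa = backproject(ua, va, depth_map[va, ua], fx, fy, cx, cy)
    pb = backproject(ub, vb, depth_map[vb, ub], fx, fy, cx, cy)
    dist = np.linalg.norm(pa - pb)
    question = f"How far apart are the {name_a} and the {name_b}?"
    answer = f"The {name_a} is about {dist:.2f} meters from the {name_b}."
    return question, answer

# Toy usage with a synthetic depth map standing in for a monocular depth estimate.
depth_map = np.full((480, 640), 2.0)           # 2 m everywhere
intrinsics = (525.0, 525.0, 320.0, 240.0)      # fx, fy, cx, cy
q, a = make_distance_qa("mug", (100, 200), "laptop", (500, 300), depth_map, intrinsics)
print(q)
print(a)
```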

Model Training and Evaluation

SpatialVLM is trained on a variant of the PaLM-E architecture, with a portion of the training tokens dedicated to spatial reasoning tasks. Comparisons with contemporary VLMs show that SpatialVLM is more effective on spatial reasoning benchmarks. The paper also examines how synthetic data quality and different training strategies affect learning. Notable findings are that VLMs can benefit from spatial VQA supervision without compromising general VQA capabilities, and that unfreezing the vision transformer (ViT) encoder is essential for fine-grained distance estimation. Moreover, despite noise in the training data, SpatialVLM learns spatial estimates that generalize.
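
As an illustration of the ViT-unfreezing finding, the sketch below shows how one might toggle a vision encoder between frozen and trainable during fine-tuning. It uses PyTorch and a placeholder model whose vision submodule is assumed to be named `vision_encoder`; it is a generic recipe, not the paper's PaLM-E training code.

```python
# Hedged sketch of the ViT-unfreezing ablation: when fine-tuning a generic VLM on
# spatial VQA, leave the vision encoder trainable so fine-grained metric cues can
# reach the fused representation. The attribute name `vision_encoder` and the toy
# model below are placeholders.
import torch

def configure_trainable(model: torch.nn.Module, unfreeze_vit: bool = True) -> int:
    """Toggle requires_grad per submodule and return the trainable parameter count."""
    for name, param in model.named_parameters():
        if name.startswith("vision_encoder"):
            param.requires_grad = unfreeze_vit   # the ablation: frozen vs. trainable ViT
        else:
            param.requires_grad = True           # language backbone stays trainable
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Toy usage on a stand-in model whose vision submodule is named `vision_encoder`.
toy = torch.nn.Module()
toy.vision_encoder = torch.nn.Linear(8, 8)
toy.language_model = torch.nn.Linear(8, 8)
print(configure_trainable(toy, unfreeze_vit=False))  # counts only the language parameters
```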

Applications and Contributions

SpatialVLM stands out by functioning as an open-vocabulary, dense reward annotator for robotic tasks, showcasing the practical utility of spatially aware VLMs. Furthermore, when coupled with a powerful LLM, SpatialVLM supports complex chain-of-thought spatial reasoning, illustrating how such models can carry out multi-step reasoning tasks. The main contributions are advancing quantitative spatial reasoning capability in VLMs and introducing a framework for generating an extensive, metric 3D spatial reasoning dataset anchored in real-world imagery. Together, these position SpatialVLM as a strong foundation for intricate reasoning and robotics applications.
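
To make the dense-reward idea concrete, the following hedged sketch parses a metric answer from a spatially grounded VLM into a shaping reward at each control step. `query_vlm` is a hypothetical stand-in for whatever inference endpoint serves the model, stubbed with a canned answer so the example runs end to end; it is not the paper's API.

```python
# Illustrative sketch of the dense-reward idea: query the VLM for a metric distance
# and use the (negative) parsed value as a shaping reward. The model call is stubbed.
import re

def query_vlm(image, question: str) -> str:
    """Stub for a SpatialVLM-style model call (assumed interface, not the paper's API)."""
    return "The gripper is approximately 0.42 meters from the coke can."

def distance_reward(image, obj_a: str, obj_b: str) -> float:
    """Reward that grows as obj_a approaches obj_b, parsed from the VLM's free-form answer."""
    answer = query_vlm(image, f"What is the distance between the {obj_a} and the {obj_b} in meters?")
    match = re.search(r"(\d+(?:\.\d+)?)", answer)
    distance = float(match.group(1)) if match else float("inf")
    return -distance  # closer is better

print(distance_reward(image=None, obj_a="gripper", obj_b="coke can"))  # -0.42 with the stub above
```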

Authors (9)
  1. Boyuan Chen (75 papers)
  2. Zhuo Xu (82 papers)
  3. Sean Kirmani (18 papers)
  4. Brian Ichter (52 papers)
  5. Danny Driess (35 papers)
  6. Pete Florence (33 papers)
  7. Dorsa Sadigh (162 papers)
  8. Leonidas Guibas (177 papers)
  9. Fei Xia (111 papers)
Citations (88)