
Language-Image Models with 3D Understanding (2405.03685v1)

Published 6 May 2024 in cs.CV, cs.AI, cs.CL, and cs.LG

Abstract: Multi-modal LLMs (MLLMs) have shown remarkable capabilities on a variety of 2D vision-and-language tasks. We extend MLLMs' perceptual capabilities to ground and reason about images in 3-dimensional space. To that end, we first develop a large-scale pre-training dataset for 2D and 3D, called LV3D, by combining multiple existing 2D and 3D recognition datasets under a common task formulation: multi-turn question answering. Next, we introduce a new MLLM named Cube-LLM and pre-train it on LV3D. We show that pure data scaling yields strong 3D perception capability without any 3D-specific architectural design or training objective. Cube-LLM exhibits intriguing properties similar to LLMs: (1) Cube-LLM can apply chain-of-thought prompting to improve 3D understanding from 2D context information. (2) Cube-LLM can follow complex and diverse instructions and adapt to versatile input and output formats. (3) Cube-LLM can be visually prompted with inputs such as a 2D box or a set of candidate 3D boxes from specialist models. Our experiments on outdoor benchmarks demonstrate that Cube-LLM significantly outperforms existing baselines: by 21.3 points of AP-BEV on the Talk2Car dataset for 3D grounded reasoning, and by 17.7 points on the DriveLM dataset for complex reasoning about driving scenarios. Cube-LLM also shows competitive results on general MLLM benchmarks, such as refCOCO for 2D grounding with an 87.0 average score, as well as visual question answering benchmarks such as VQAv2, GQA, SQA, and POPE for complex reasoning. Our project is available at https://janghyuncho.github.io/Cube-LLM.
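The abstract's central recipe is recasting heterogeneous 2D and 3D recognition labels into a single multi-turn question-answering format so that one autoregressive model can be trained on all of them. A minimal sketch of that idea is below; note that the field names, coordinate conventions, and text templates are illustrative assumptions, not the paper's actual LV3D schema.

```python
# Hypothetical sketch of the LV3D-style data unification described in the
# abstract: one 3D detection label is rewritten as a (question, answer) text
# turn, so a text-only training objective can supervise 3D grounding.
# All field names and templates here are assumptions for illustration.

def box_to_qa(label):
    """Turn one 3D box annotation into a (question, answer) text pair."""
    x, y, z = label["center"]   # assumed: 3D box center in camera coordinates
    l, w, h = label["size"]     # assumed: box length, width, height
    yaw = label["yaw"]          # assumed: heading angle in radians
    category = label["category"]
    question = f"Provide the 3D bounding box of the {category}."
    # The answer serializes the box as plain text, so no 3D-specific
    # output head is needed -- the point made in the abstract.
    answer = (f"({x:.1f}, {y:.1f}, {z:.1f}, "
              f"{l:.1f}, {w:.1f}, {h:.1f}, {yaw:.2f})")
    return question, answer

# Example annotation (fabricated values, for illustration only):
label = {"category": "car", "center": (1.5, 0.2, 12.0),
         "size": (4.5, 1.8, 1.6), "yaw": 0.05}
q, a = box_to_qa(label)
print(q)  # Provide the 3D bounding box of the car.
print(a)  # (1.5, 0.2, 12.0, 4.5, 1.8, 1.6, 0.05)
```

Because both 2D boxes and 3D boxes reduce to short text strings under this scheme, 2D and 3D datasets can be mixed freely in pre-training, which is what lets the paper attribute the 3D capability to data scaling alone.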

Authors (11)
  1. Jang Hyun Cho (9 papers)
  2. Boris Ivanovic (62 papers)
  3. Yulong Cao (26 papers)
  4. Edward Schmerling (46 papers)
  5. Yue Wang (675 papers)
  6. Xinshuo Weng (42 papers)
  7. Boyi Li (39 papers)
  8. Yurong You (28 papers)
  9. Philipp Krähenbühl (55 papers)
  10. Yan Wang (733 papers)
  11. Marco Pavone (314 papers)
Citations (12)