LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning (2311.18651v1)
Abstract: Recent advances in Large Multimodal Models (LMMs) have enabled a wide range of applications in human-machine interaction. However, developing LMMs that can comprehend, reason, and plan in complex and diverse 3D environments remains challenging, especially given the demand for understanding permutation-invariant point cloud representations of 3D scenes. Existing works rely on multi-view images, projecting 2D features into 3D space to form scene representations; this, however, incurs large computational overhead and performance degradation. In this paper, we present LL3DA, a Large Language 3D Assistant that takes point clouds as direct input and responds to both textual instructions and visual prompts. This helps LMMs better comprehend human interactions and further removes ambiguities in cluttered 3D scenes. Experiments show that LL3DA achieves remarkable results and surpasses various 3D vision-language models on both 3D Dense Captioning and 3D Question Answering.
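To make the abstract's core idea concrete, here is a minimal PyTorch sketch of the general pattern it describes: a permutation-invariant point cloud encoder whose output is fused with a textual instruction embedding and a visual prompt (e.g., a clicked 3D location) to form a prefix for a language model. All module names, dimensions, and the fusion scheme below are illustrative assumptions, not the authors' actual LL3DA implementation.

```python
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """Per-point MLP followed by max pooling: invariant to point ordering."""
    def __init__(self, in_dim=6, hidden=256, out_dim=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, points):              # points: (B, N, in_dim), e.g. xyz + rgb
        feats = self.mlp(points)            # (B, N, out_dim)
        return feats.max(dim=1).values      # (B, out_dim), order-invariant scene feature

class InteractiveFusion(nn.Module):
    """Fuses scene, instruction, and visual-prompt features into one
    fixed-length prefix vector a language model could attend to."""
    def __init__(self, scene_dim=512, text_dim=512, prompt_dim=3, llm_dim=768):
        super().__init__()
        self.prompt_proj = nn.Linear(prompt_dim, scene_dim)   # e.g. a clicked 3D point
        self.to_llm = nn.Linear(scene_dim * 2 + text_dim, llm_dim)

    def forward(self, scene_feat, text_feat, click_xyz):
        prompt_feat = self.prompt_proj(click_xyz)             # (B, scene_dim)
        fused = torch.cat([scene_feat, prompt_feat, text_feat], dim=-1)
        return self.to_llm(fused)                             # (B, llm_dim) prefix

if __name__ == "__main__":
    B, N = 2, 4096
    encoder, fusion = PointCloudEncoder(), InteractiveFusion()
    points = torch.randn(B, N, 6)            # toy scene point cloud
    text_feat = torch.randn(B, 512)          # placeholder instruction embedding
    click = torch.randn(B, 3)                # visual prompt: user-clicked location
    prefix = fusion(encoder(points), text_feat, click)
    print(prefix.shape)                      # torch.Size([2, 768])
```

The max-pooled per-point MLP is the simplest way to respect permutation invariance; the actual model is far more sophisticated, but the sketch shows why direct point cloud input avoids the 2D-to-3D projection overhead the abstract criticizes.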
- Sijin Chen (12 papers)
- Xin Chen (456 papers)
- Chi Zhang (566 papers)
- Mingsheng Li (9 papers)
- Gang Yu (114 papers)
- Hao Fei (105 papers)
- Hongyuan Zhu (36 papers)
- Jiayuan Fan (29 papers)
- Tao Chen (397 papers)