A3VLM: Actionable Articulation-Aware Vision Language Model (2406.07549v2)

Published 11 Jun 2024 in cs.RO

Abstract: Vision Language Models (VLMs) have received significant attention in recent years in the robotics community. VLMs have been shown to perform complex visual reasoning and scene understanding tasks, which has led them to be regarded as a potential universal solution for general robotics problems such as manipulation and navigation. However, previous VLMs for robotics such as RT-1, RT-2, and ManipLLM have focused on directly learning robot-centric actions. Such approaches require collecting a significant amount of robot interaction data, which is extremely costly in the real world. Thus, we propose A3VLM, an object-centric, actionable, articulation-aware vision language model. A3VLM focuses on the articulation structure and action affordances of objects. Its representation is robot-agnostic and can be translated into robot actions using simple action primitives. Extensive experiments in both simulation benchmarks and real-world settings demonstrate the effectiveness and stability of A3VLM. We release our code and other materials at https://github.com/changhaonan/A3VLM.
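To make the "robot-agnostic representation translated via simple action primitives" idea concrete, here is a minimal sketch of how an articulation-aware object description might be mapped to a primitive action. The data class fields, function name, and primitive names are illustrative assumptions, not the paper's actual API; see the released code at the repository above for the authors' implementation.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical, object-centric articulation description of a movable part
# (names are assumptions for illustration, not A3VLM's actual interface).
@dataclass
class ArticulationRepr:
    joint_type: str          # "revolute" (e.g., door) or "prismatic" (e.g., drawer)
    axis_origin: np.ndarray  # a 3D point on the joint axis
    axis_dir: np.ndarray     # unit direction of the joint axis
    grasp_point: np.ndarray  # actionable point on the movable part

def to_action_primitive(repr_: ArticulationRepr, magnitude: float) -> dict:
    """Translate the robot-agnostic representation into a simple primitive:
    slide along the axis for prismatic joints, rotate about the axis for
    revolute joints. A downstream robot-specific controller would execute it."""
    if repr_.joint_type == "prismatic":
        return {"primitive": "slide",
                "start": repr_.grasp_point,
                "direction": repr_.axis_dir,
                "distance": magnitude}
    if repr_.joint_type == "revolute":
        return {"primitive": "rotate",
                "pivot": repr_.axis_origin,
                "axis": repr_.axis_dir,
                "angle": magnitude}
    raise ValueError(f"unknown joint type: {repr_.joint_type}")
```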

Authors (8)
  1. Siyuan Huang (123 papers)
  2. Haonan Chang (16 papers)
  3. Yuhan Liu (103 papers)
  4. Yimeng Zhu (4 papers)
  5. Hao Dong (175 papers)
  6. Peng Gao (401 papers)
  7. Abdeslam Boularias (49 papers)
  8. Hongsheng Li (340 papers)
Citations (5)