ClawMachine: Fetching Visual Tokens as An Entity for Referring and Grounding (2406.11327v1)

Published 17 Jun 2024 in cs.CV

Abstract: An essential topic for multimodal LLMs (MLLMs) is aligning vision and language concepts at a finer level. In particular, we devote efforts to encoding visual referential information for tasks such as referring and grounding. Existing methods, including proxy encoding and geometry encoding, incorporate additional syntax to encode the object's location, which imposes an extra burden in training MLLMs to communicate between language and vision. This study presents ClawMachine, a new methodology that notates an entity directly using its visual tokens, allowing us to unify the prompts and answers of visual referential tasks without additional syntax. Built upon a joint vision-language vocabulary, ClawMachine unifies visual referring and grounding into an auto-regressive format and learns with a decoder-only architecture. Experiments validate that our model achieves competitive performance across visual referring and grounding tasks with a reduced demand for training data. Additionally, ClawMachine demonstrates a native ability to integrate multi-source information for complex visual reasoning, which prior MLLMs can hardly perform without task-specific adaptations.
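
To make the mechanism concrete, below is a minimal sketch (not the authors' implementation) of the idea the abstract describes: an entity's quantized visual tokens are offset into a shared vocabulary and spliced directly into the token stream, so referring and grounding reduce to ordinary next-token prediction in a decoder-only model. All sizes, ids, and names here (TEXT_VOCAB, VISUAL_CODES, the example sequences) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

TEXT_VOCAB = 32000      # assumed text vocabulary size
VISUAL_CODES = 8192     # assumed discrete visual codebook size (e.g. from a VQ model)
JOINT_VOCAB = TEXT_VOCAB + VISUAL_CODES  # shared vision-language vocabulary

def visual_token_ids(codebook_indices: torch.Tensor) -> torch.Tensor:
    # Map visual codebook indices into the shared id space by offsetting
    # them past the text vocabulary, so both modalities share one embedding.
    return codebook_indices + TEXT_VOCAB

class CausalLM(nn.Module):
    """Decoder-only transformer over the joint vocabulary (sketch)."""
    def __init__(self, d_model=512, n_heads=8, n_layers=4, max_len=1024):
        super().__init__()
        self.tok = nn.Embedding(JOINT_VOCAB, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, JOINT_VOCAB)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        _, T = ids.shape
        x = self.tok(ids) + self.pos(torch.arange(T, device=ids.device))
        # Causal mask makes this an auto-regressive (decoder-only) model.
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(ids.device)
        return self.head(self.blocks(x, mask=mask))

# Example: a grounding-style sequence where the "answer" is the entity's own
# visual tokens. Ids are hypothetical; a real system would obtain them from
# a text tokenizer and a visual quantizer.
prompt_text = torch.tensor([12, 845, 77, 3021])     # e.g. "where is the dog"
region_codes = torch.tensor([101, 57, 900, 4410])   # quantized patch codes of the entity
sequence = torch.cat([prompt_text, visual_token_ids(region_codes)])

model = CausalLM()
logits = model(sequence.unsqueeze(0))               # (1, T, JOINT_VOCAB)
# Standard next-token loss: predict token t+1 from tokens <= t.
loss = F.cross_entropy(logits[0, :-1], sequence[1:])
print(loss.item())
```

The design point this illustrates is that no box-coordinate syntax or region proxy enters the sequence: under this reading of the abstract, the answer to a grounding query is literally the entity's own visual tokens, drawn from the same vocabulary the model already predicts over.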

Authors (7)
  1. Tianren Ma (4 papers)
  2. Lingxi Xie (137 papers)
  3. Yunjie Tian (17 papers)
  4. Boyu Yang (10 papers)
  5. Yuan Zhang (331 papers)
  6. David Doermann (54 papers)
  7. Qixiang Ye (110 papers)