Scaffolding Coordinates to Promote Vision-Language Coordination in Large Multi-Modal Models (2402.12058v1)

Published 19 Feb 2024 in cs.CV and cs.CL

Abstract: State-of-the-art Large Multi-Modal Models (LMMs) have demonstrated exceptional capabilities in vision-language tasks. Despite their advanced functionality, the performance of LMMs remains limited in challenging scenarios that require complex reasoning over multiple levels of visual information. Existing prompting techniques for LMMs focus on either improving textual reasoning or leveraging tools for image preprocessing, lacking a simple and general visual prompting scheme that promotes vision-language coordination. In this work, we propose Scaffold prompting, which scaffolds coordinates to promote vision-language coordination. Specifically, Scaffold overlays a dot matrix on the image as visual information anchors and leverages multi-dimensional coordinates as textual positional references. Extensive experiments on a wide range of challenging vision-language tasks demonstrate the superiority of Scaffold over GPT-4V with textual CoT prompting. Our code is released at https://github.com/leixy20/Scaffold.
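
The abstract describes the mechanism but not the code. Below is a minimal, hypothetical Python sketch of the idea as stated above: draw an evenly spaced dot matrix on the image and label each dot with its coordinates so the textual side of the prompt can reference them. The grid size, dot styling, labeling scheme, and prompt wording are illustrative assumptions, not the authors' settings; the official implementation is at https://github.com/leixy20/Scaffold.

```python
# Sketch of Scaffold-style visual prompting, based only on the abstract:
# overlay a dot matrix as visual anchors and label each dot with (row, col)
# coordinates that the textual prompt can reference. All parameters here
# (6x6 grid, red dots, label format) are illustrative assumptions.
from PIL import Image, ImageDraw

def overlay_dot_matrix(image: Image.Image, rows: int = 6, cols: int = 6) -> Image.Image:
    """Draw an evenly spaced dot matrix labeled with (row, col) coordinates."""
    img = image.copy()
    draw = ImageDraw.Draw(img)
    w, h = img.size
    for r in range(1, rows + 1):
        for c in range(1, cols + 1):
            # Place dots on an interior grid, away from the image borders.
            x = w * c / (cols + 1)
            y = h * r / (rows + 1)
            draw.ellipse([x - 4, y - 4, x + 4, y + 4], fill="red")
            draw.text((x + 6, y - 6), f"({r},{c})", fill="red")
    return img

if __name__ == "__main__":
    scaffolded = overlay_dot_matrix(Image.open("input.jpg"))  # placeholder path
    scaffolded.save("scaffolded.jpg")
    # The textual prompt then references the same coordinate system, e.g.
    # "The image is overlaid with dots labeled (row, col); use these
    # coordinates as positional references when reasoning about the image."
```

Pairing each visual anchor with an explicit coordinate gives the model a shared reference frame, so textual reasoning steps can point at specific image regions rather than describing locations vaguely.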

Authors (5)
  1. Xuanyu Lei (10 papers)
  2. Zonghan Yang (23 papers)
  3. Xinrui Chen (6 papers)
  4. Peng Li (390 papers)
  5. Yang Liu (2253 papers)
Citations (18)