Enhancing Advanced Visual Reasoning Ability of Large Language Models (2409.13980v1)
Abstract: Recent advancements in Vision-Language (VL) research have sparked new benchmarks for complex visual reasoning, challenging models' advanced reasoning abilities. Traditional Vision-Language Models (VLMs) perform well in visual perception tasks but struggle in complex reasoning scenarios. Conversely, Large Language Models (LLMs) demonstrate robust text reasoning capabilities; however, they lack visual perception. To bridge this gap, we propose Complex Visual Reasoning LLMs (CVR-LLM), capitalizing on VLMs' visual perception proficiency and LLMs' extensive reasoning capability. Unlike recent multimodal LLMs (MLLMs) that require a projection layer, our approach transforms images into detailed, context-aware descriptions using an iterative self-refinement loop and leverages LLMs' text knowledge for accurate predictions without extra training. We also introduce a novel multi-modal in-context learning (ICL) methodology to enhance LLMs' contextual understanding and reasoning. Additionally, we introduce Chain-of-Comparison (CoC), a step-by-step comparison technique that contrasts various aspects of the predictions. Our CVR-LLM presents the first comprehensive study across a wide array of complex visual reasoning tasks and achieves state-of-the-art (SOTA) performance across all of them.
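A minimal sketch of how the iterative self-refinement loop described in the abstract might be organized, assuming a VLM captioner and an LLM critic exposed as plain callables. The function names (`generate_caption`, `critique_caption`) and the stopping rule are illustrative assumptions for this sketch, not the authors' exact procedure.

```python
# Sketch: refine an image description until an LLM critic deems it
# sufficient for the downstream reasoning task (hypothetical interfaces).
from typing import Callable, Any

def refine_description(
    image: Any,
    task_context: str,
    generate_caption: Callable[[Any, str], str],  # VLM: (image, feedback) -> caption
    critique_caption: Callable[[str, str], str],  # LLM: (caption, task) -> feedback, "" if satisfied
    max_iters: int = 3,
) -> str:
    """Iteratively refine a context-aware image description."""
    feedback = ""
    caption = generate_caption(image, feedback)
    for _ in range(max_iters):
        feedback = critique_caption(caption, task_context)
        if not feedback:  # empty feedback -> description judged task-sufficient
            break
        caption = generate_caption(image, feedback)  # regenerate with the critic's hints
    return caption
```

The refined description would then be passed, together with in-context examples, to a text-only LLM for the final prediction, which is how the abstract describes avoiding any extra training or projection layer.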
- Zhiyuan Li (304 papers)
- Dongnan Liu (47 papers)
- Chaoyi Zhang (51 papers)
- Heng Wang (136 papers)
- Tengfei Xue (23 papers)
- Weidong Cai (118 papers)