A Cognitive Paradigm Approach to Probe the Perception-Reasoning Interface in VLMs (2501.13620v5)
Abstract: A fundamental challenge in artificial intelligence involves understanding the cognitive mechanisms underlying visual reasoning in sophisticated models like Vision-Language Models (VLMs). How do these models integrate visual perception with abstract thought, especially when reasoning across multiple images or when fine-grained compositional understanding is required? Drawing inspiration from cognitive science, this paper introduces a structured evaluation framework that uses diverse visual reasoning tasks, Bongard Problems (BPs) and Winoground, to dissect the perception-reasoning interface in VLMs. We propose three distinct evaluation paradigms, mirroring human problem-solving strategies: Direct Visual Rule Learning (DVRL; holistic processing), Deductive Rule Learning (DRL; rule extraction and application), and Componential Analysis (CA; analytical decomposition via task-agnostic textual descriptions). These paradigms systematically vary cognitive load and probe different processing stages. Notably, CA enables multi-image reasoning evaluation even for single-image architectures and isolates reasoning from perception by operating on textual descriptions. Applying this framework, we demonstrate that CA, leveraging powerful LLMs for reasoning over rich, independently generated descriptions, achieves new state-of-the-art (SOTA) performance on challenging benchmarks including Bongard-OpenWorld, Bongard-HOI, and Winoground. Ablation studies confirm that reasoning improves significantly when perceptual challenges are mitigated, revealing a critical perception bottleneck. Our framework provides a valuable diagnostic tool and suggests that decoupling perception (via rich, task-agnostic description) from reasoning is a promising direction for robust and general visual intelligence.
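To make the Componential Analysis (CA) paradigm concrete, the following is a minimal Python sketch of the two-stage pipeline the abstract describes: each image is first converted into a task-agnostic textual description (perception), and a text-only LLM then induces the rule and classifies a query purely from those descriptions (reasoning). The functions `describe_image` and `query_llm` are hypothetical stand-ins for whatever captioning VLM and LLM one chooses; they are not drawn from the paper's code.

```python
from typing import List


def describe_image(image_path: str) -> str:
    """Hypothetical wrapper around a captioning VLM.

    Returns a rich, task-agnostic description of a single image.
    """
    raise NotImplementedError("plug in a captioning VLM here")


def query_llm(prompt: str) -> str:
    """Hypothetical wrapper around a text-only LLM."""
    raise NotImplementedError("plug in a text-only LLM here")


def solve_bongard_with_ca(positive_images: List[str],
                          negative_images: List[str],
                          query_image: str) -> str:
    """Classify a query image for a Bongard-style problem via CA.

    Each image is described independently (perception stage); rule
    induction and classification then happen entirely over text
    (reasoning stage), so perception and reasoning are decoupled.
    """
    pos_desc = [describe_image(p) for p in positive_images]
    neg_desc = [describe_image(p) for p in negative_images]
    query_desc = describe_image(query_image)

    prompt = (
        "Descriptions of positive examples:\n- " + "\n- ".join(pos_desc)
        + "\n\nDescriptions of negative examples:\n- " + "\n- ".join(neg_desc)
        + "\n\nState the concept shared by the positive examples but absent "
          "from the negative examples, then say whether the following image "
          "is positive or negative:\n" + query_desc
    )
    return query_llm(prompt)
```

Because the reasoning step sees only text, this sketch also illustrates why CA can evaluate multi-image reasoning with single-image architectures: the VLM is invoked once per image, and only the LLM ever sees the full problem.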