Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models (2309.04461v2)

Published 8 Sep 2023 in cs.CL, cs.CV, and cs.LG

Abstract: Vision-language models (VLMs) have recently demonstrated strong efficacy as visual assistants that can parse natural queries about the visual content and generate human-like outputs. In this work, we explore the ability of these models to demonstrate human-like reasoning based on the perceived information. To address a crucial concern regarding the extent to which their reasoning capabilities are fully consistent and grounded, we also measure the reasoning consistency of these models. We achieve this by proposing a chain-of-thought (CoT) based consistency measure. However, such an evaluation requires a benchmark that encompasses both high-level inference and detailed reasoning chains, which is costly. We tackle this challenge by proposing an LLM-Human-in-the-Loop pipeline, which notably reduces cost while simultaneously ensuring the generation of a high-quality dataset. Based on this pipeline and the existing coarse-grained annotated dataset, we build the CURE benchmark to measure both the zero-shot reasoning performance and consistency of VLMs. We evaluate existing state-of-the-art VLMs, and find that even the best-performing model is unable to demonstrate strong visual reasoning capabilities and consistency, indicating that substantial efforts are required to enable VLMs to perform visual reasoning as systematically and consistently as humans. As an early step, we propose a two-stage training framework aimed at improving both the reasoning performance and consistency of VLMs. The first stage involves employing supervised fine-tuning of VLMs using step-by-step reasoning samples automatically generated by LLMs. In the second stage, we further augment the training process by incorporating feedback provided by LLMs to produce reasoning chains that are highly consistent and grounded. We empirically highlight the effectiveness of our framework in both reasoning performance and consistency.

Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models

This paper explores the capabilities and limitations of Vision-Language Models (VLMs) with respect to their reasoning consistency and performance, focusing on their ability to carry out human-like chain-of-thought (CoT) reasoning. The authors acknowledge VLMs' competence in responding to visual queries but stress that models should also exhibit systematic visual reasoning akin to human cognition. Highlighting inconsistencies in the reasoning of state-of-the-art VLMs, the paper aims to improve both reasoning performance and reasoning consistency.

To quantify and enhance VLMs' reasoning capabilities, the paper introduces a benchmark named CURE, built with an LLM-Human-in-the-Loop pipeline for dataset creation. The benchmark serves a dual aim: measuring zero-shot reasoning performance and evaluating reasoning consistency. The authors find that even the most proficient VLMs fall short of robust visual reasoning and consistency, leaving a persistent gap relative to human-level inference accuracy.
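To make the notion of consistency concrete, the sketch below illustrates one plausible way a CoT-based consistency measure could be computed: the model answers both a high-level question and the sub-questions of its reasoning chain, and consistency is the fraction of correctly answered high-level questions whose full chain is also answered correctly. The `vlm_answer` helper, the data fields, and the aggregation are illustrative assumptions, not the paper's exact definition.

```python
# A minimal sketch of a chain-of-thought consistency measure.
# Assumption: each benchmark item bundles a high-level question with the
# sub-questions of its reasoning chain; `vlm_answer` is a hypothetical
# helper that queries the VLM and returns its chosen option.

def vlm_answer(model, image, question, options):
    """Query the VLM with an image, question, and answer options (assumed API)."""
    raise NotImplementedError  # stand-in for a real model call

def cot_consistency(model, benchmark):
    correct_top = 0   # items whose high-level question is answered correctly
    consistent = 0    # of those, items whose entire reasoning chain is also correct
    for item in benchmark:
        top_ok = vlm_answer(model, item["image"], item["question"],
                            item["options"]) == item["answer"]
        if not top_ok:
            continue
        correct_top += 1
        chain_ok = all(
            vlm_answer(model, item["image"], sub["question"],
                       sub["options"]) == sub["answer"]
            for sub in item["reasoning_chain"]
        )
        consistent += int(chain_ok)
    return consistent / max(correct_top, 1)
```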

The paper proposes a two-stage training framework to narrow this gap. The framework combines supervised fine-tuning with learning from LLM feedback, requiring no human annotations, and aims to produce reasoning chains that are consistent and well grounded while improving overall visual reasoning. The framework yields a relative improvement of about 4% in both reasoning performance and consistency, a tangible advance in VLM training methodology.
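A hedged sketch of how the two stages might fit together is shown below: stage one fine-tunes the VLM on step-by-step rationales generated offline by an LLM, and stage two samples chains from the tuned model and keeps only those an LLM critic scores as consistent and grounded. All helper names (`generate_rationale`, `sample_chain`, `score_chain`, `finetune_step`) and the filtering threshold are hypothetical placeholders, not the paper's implementation.

```python
# Hedged sketch of a two-stage training loop: supervised fine-tuning on
# LLM-generated rationales, followed by a feedback stage in which an LLM
# critic filters the VLM's own reasoning chains. Helper functions are
# hypothetical stand-ins for a real training pipeline.

def stage1_sft(vlm, dataset, generate_rationale, finetune_step):
    """Stage 1: fine-tune on step-by-step rationales produced by an LLM."""
    for ex in dataset:
        rationale = generate_rationale(ex)  # LLM writes the reasoning chain offline
        target = f"{rationale} So the answer is {ex['answer']}."
        finetune_step(vlm, ex["image"], ex["question"], target)

def stage2_feedback(vlm, dataset, sample_chain, score_chain, finetune_step,
                    threshold=0.8):
    """Stage 2: reinforce only chains the LLM critic rates as well grounded."""
    for ex in dataset:
        chain = sample_chain(vlm, ex["image"], ex["question"])
        if score_chain(chain, ex) >= threshold:  # LLM feedback acts as a filter
            finetune_step(vlm, ex["image"], ex["question"], chain)
```

In this reading, LLM feedback acts as a rejection-sampling style filter; the paper's actual feedback mechanism may differ, so treat this only as an outline of the idea.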

Empirically, the paper evaluates current VLMs on CURE, whose questions gauge both overall reasoning and the quality of intermediate reasoning steps. The results indicate that strong inference performance depends on integrating LLMs with multimodal data, yet substantial room for improvement remains.

This research has profound implications for the development of VLMs. Enhancing reasoning consistency is crucial not only for improving existing models but also for guiding future advances in AI and multimodal learning. The findings suggest directions for future work, such as the integration of more comprehensive visual data sources and further refinement of the training procedures leveraging scalable datasets.

In conclusion, the paper makes a substantive contribution to the field of vision-language modeling by highlighting current limitations, proposing concrete methods for improvement, and offering a substantial dataset and benchmark for future exploration of visual reasoning in AI. The proposed framework, together with the CURE benchmark, lays the groundwork for further investigation of the reasoning abilities of VLMs and their potential to more closely replicate human-like understanding.

Future developments along this trajectory may yield more robust models that integrate multimodal information seamlessly, achieving reasoning and consistency that more closely mirror human cognition and improving the interface between humans and AI systems.

Authors (5)
  1. Yangyi Chen (29 papers)
  2. Karan Sikka (32 papers)
  3. Michael Cogswell (19 papers)
  4. Heng Ji (266 papers)
  5. Ajay Divakaran (43 papers)
Citations (17)