
Human-Aligned Bench: Fine-Grained Assessment of Reasoning Ability in MLLMs vs. Humans (2505.11141v2)

Published 16 May 2025 in cs.CV and cs.AI

Abstract: The pursuit of AGI aims to imitate humans and ultimately surpass them. Models such as OpenAI's o1 and o3 and DeepSeek's R1 have demonstrated that LLMs with human-like reasoning capabilities achieve exceptional performance, and such reasoning is gradually being integrated into multimodal LLMs (MLLMs). However, whether these models handle reasoning tasks with capabilities comparable to humans remains unclear. In this paper, we propose Human-Aligned Bench, a benchmark for fine-grained alignment of multimodal reasoning with human performance. Specifically, we collected 9,794 multimodal questions that rely solely on contextual reasoning, comprising bilingual (Chinese and English) multimodal questions and pure text-based questions across four question types: visual reasoning, definition judgment, analogical reasoning, and logical judgment. More importantly, each question is accompanied by its human success rate and the options that humans are prone to choosing incorrectly. Extensive experiments on the Human-Aligned Bench reveal notable differences between the multimodal reasoning performance of current MLLMs and human performance. The findings on our benchmark provide insights for the development of next-generation models.
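Because each question ships with a human success rate and the distractor humans most often pick, a natural evaluation is a per-question-type comparison of model accuracy, human accuracy, and how often the model falls for the same distractor. Below is a minimal sketch of such a comparison; the field names (`human_success_rate`, `human_error_option`, `question_type`, etc.) are illustrative assumptions, not the paper's released schema.

```python
# Hypothetical evaluation sketch for a Human-Aligned-Bench-style dataset.
# Field names below are assumptions for illustration, not the actual schema.
from collections import defaultdict

def alignment_report(questions, model_answers):
    """Aggregate model accuracy, mean human success rate, and the rate at which
    the model picks the option humans most often choose incorrectly,
    grouped by question type."""
    stats = defaultdict(lambda: {"model_correct": 0, "human_rate_sum": 0.0,
                                 "human_error_picked": 0, "n": 0})
    for q in questions:
        s = stats[q["question_type"]]
        pred = model_answers[q["id"]]
        s["n"] += 1
        s["model_correct"] += int(pred == q["answer"])
        s["human_rate_sum"] += q["human_success_rate"]
        # Does the model make the same mistake humans tend to make?
        s["human_error_picked"] += int(pred == q["human_error_option"])

    return {
        qtype: {
            "model_accuracy": s["model_correct"] / s["n"],
            "human_accuracy": s["human_rate_sum"] / s["n"],
            "shared_error_rate": s["human_error_picked"] / s["n"],
        }
        for qtype, s in stats.items()
    }
```

Comparing `model_accuracy` against `human_accuracy` per type, and inspecting `shared_error_rate`, is one way to read the "fine-grained alignment" the abstract describes.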

Authors (6)
  1. Yansheng Qiu (5 papers)
  2. Li Xiao (85 papers)
  3. Zhaopan Xu (15 papers)
  4. Pengfei Zhou (40 papers)
  5. Zheng Wang (400 papers)
  6. Kaipeng Zhang (73 papers)
