Truly Assessing Fluid Intelligence of Large Language Models through Dynamic Reasoning Evaluation (2506.02648v1)

Published 3 Jun 2025 in cs.AI

Abstract: Recent advances in LLMs have demonstrated impressive reasoning capacities that mirror human-like thinking. However, whether LLMs possess genuine fluid intelligence (i.e., the ability to reason abstractly and generalize rules in novel situations) remains an open question. Existing reasoning benchmarks either focus on domain-specific knowledge (crystallized intelligence) or lack interpretability. To address these limitations, we propose DRE-Bench, a dynamic reasoning evaluation benchmark grounded in a hierarchical cognitive framework. DRE-Bench consists of 36 abstract reasoning tasks organized across four cognitive levels, with each task featuring multiple dynamic variants that test the same underlying latent rule. This design enables fine-grained, interpretable, and reliable assessments of fluid intelligence. We evaluate a range of state-of-the-art LLMs, including both general LLMs (GPT-4o, Claude 3.7) and reasoning LLMs (o1, DeepSeek-R1, QwQ, Skywork-OR1). Experimental results reveal that although most LLMs achieve competent and robust performance in low-level cognition, they struggle with high-level cognition and exhibit limited generalization as task complexity grows. Our findings highlight the gap between current LLMs and true human-like fluid intelligence and offer a new path for systematically tracking reasoning progress in LLMs.
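
The abstract's central design choice, many dynamic surface variants per latent rule, is what separates fluid from crystallized ability: a model that has truly abstracted the rule should score uniformly across variants. A minimal sketch of that evaluation pattern follows. It is not DRE-Bench's code; the latent rule ("reverse the sequence"), the sampling ranges, and the `make_variant`/`evaluate` helpers are hypothetical, chosen only to illustrate the idea.

```python
import random

# Hypothetical sketch (not the authors' implementation): one latent rule,
# many dynamically sampled surface variants. Each variant re-samples the
# sequence length and token values, so memorized surface forms don't help.

def make_variant(rng: random.Random) -> tuple[list[int], list[int]]:
    """Sample one dynamic variant of the latent rule 'reverse the sequence'."""
    length = rng.randint(4, 8)                         # vary problem size
    seq = [rng.randint(0, 99) for _ in range(length)]  # vary surface tokens
    return seq, list(reversed(seq))                    # (input, expected output)

def evaluate(model, n_variants: int = 20, seed: int = 0) -> float:
    """Score a model across n dynamic variants of the same latent rule."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_variants):
        x, y = make_variant(rng)
        correct += (model(x) == y)
    return correct / n_variants

if __name__ == "__main__":
    # A 'model' that has abstracted the rule generalizes across all variants...
    print(evaluate(lambda s: list(reversed(s))))  # -> 1.0
    # ...while one that echoes the input fails on essentially every variant.
    print(evaluate(lambda s: s))                  # -> ~0.0
```

Scoring per latent rule rather than per item is what makes this kind of assessment interpretable: accuracy that drops as the surface form or problem size changes signals memorization or limited generalization rather than ignorance of the rule itself.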

Authors (12)
  1. Yue Yang (146 papers)
  2. MingKang Chen (1 paper)
  3. Qihua Liu (1 paper)
  4. Mengkang Hu (21 papers)
  5. Qiguang Chen (44 papers)
  6. Gengrui Zhang (10 papers)
  7. Shuyue Hu (27 papers)
  8. Guangtao Zhai (231 papers)
  9. Yu Qiao (563 papers)
  10. Yu Wang (939 papers)
  11. Wenqi Shao (89 papers)
  12. Ping Luo (340 papers)