Truly Assessing Fluid Intelligence of Large Language Models through Dynamic Reasoning Evaluation (2506.02648v1)
Abstract: Recent advances in LLMs have demonstrated impressive reasoning capabilities that mirror human-like thinking. However, whether LLMs possess genuine fluid intelligence (i.e., the ability to reason abstractly and generalize rules in novel situations) remains an open question. Existing reasoning benchmarks either focus on domain-specific knowledge (crystallized intelligence) or lack interpretability. To address these limitations, we propose DRE-Bench, a dynamic reasoning evaluation benchmark grounded in a hierarchical cognitive framework. DRE-Bench consists of 36 abstract reasoning tasks organized across four cognitive levels, with each task featuring multiple dynamic variants that test the same underlying latent rule. This design enables fine-grained, interpretable, and reliable assessment of fluid intelligence. We evaluate a range of state-of-the-art LLMs, including both general LLMs (GPT-4o, Claude 3.7) and reasoning LLMs (o1, DeepSeek-R1, QwQ, Skywork-OR1). Experimental results reveal that although most LLMs achieve competent and robust performance on low-level cognitive tasks, they struggle with high-level cognition and exhibit limited generalization as task complexity grows. Our findings highlight the gap between current LLMs and true human-like fluid intelligence and offer a new path for systematically tracking reasoning progress in LLMs.
- Yue Yang
- MingKang Chen
- Qihua Liu
- Mengkang Hu
- Qiguang Chen
- Gengrui Zhang
- Shuyue Hu
- Guangtao Zhai
- Yu Qiao
- Yu Wang
- Wenqi Shao
- Ping Luo
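The abstract describes each DRE-Bench task as one latent rule instantiated by multiple dynamic surface variants and placed within a four-level cognitive hierarchy. The Python sketch below is a minimal, hypothetical illustration of that structure, not the authors' implementation: the names `Task`, `Variant`, `make_variants`, and `score`, and the toy doubling rule, are assumptions introduced here for illustration only.

```python
"""Hypothetical sketch of a DRE-Bench-style task: one latent rule,
many dynamic variants, scored by accuracy across variants.
This is NOT the paper's code; all names and the toy rule are assumed."""

from dataclasses import dataclass, field
from typing import Callable, List
import random


@dataclass
class Variant:
    prompt: str    # concrete instance shown to the model
    expected: str  # answer implied by the latent rule


@dataclass
class Task:
    name: str
    cognitive_level: int               # 1 (low) .. 4 (high), per the hierarchy
    latent_rule: Callable[[int], int]  # abstract rule shared by every variant
    variants: List[Variant] = field(default_factory=list)


def make_variants(task: Task, n: int = 5, seed: int = 0) -> None:
    """Generate n surface-level variants that all instantiate the same rule."""
    rng = random.Random(seed)
    for _ in range(n):
        x = rng.randint(1, 100)
        task.variants.append(
            Variant(prompt=f"Apply the hidden rule to {x}.",
                    expected=str(task.latent_rule(x)))
        )


def score(task: Task, model_answers: List[str]) -> float:
    """Accuracy over a task's variants; consistency across variants is what
    distinguishes rule induction from memorizing a single instance."""
    correct = sum(a.strip() == v.expected
                  for a, v in zip(model_answers, task.variants))
    return correct / len(task.variants)


# Toy low-level task whose (assumed) latent rule is "double the input".
task = Task(name="toy_doubling", cognitive_level=1, latent_rule=lambda x: 2 * x)
make_variants(task, n=3)
answers = [v.expected for v in task.variants]  # a perfect model, for illustration
print(score(task, answers))                    # -> 1.0
```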